CN108733065B - Obstacle avoidance method and device for robot and robot

Obstacle avoidance method and device for robot and robot

Info

Publication number
CN108733065B
Authority
CN
China
Prior art keywords
robot
obstacle
model
posture
sub
Prior art date
Legal status
Active
Application number
CN201710912218.8A
Other languages
Chinese (zh)
Other versions
CN108733065A (en)
Inventor
王雪松
Current Assignee
Beijing Orion Star Technology Co Ltd
Original Assignee
Beijing Orion Star Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Orion Star Technology Co Ltd filed Critical Beijing Orion Star Technology Co Ltd
Priority to CN201710912218.8A priority Critical patent/CN108733065B/en
Publication of CN108733065A publication Critical patent/CN108733065A/en
Application granted granted Critical
Publication of CN108733065B publication Critical patent/CN108733065B/en


Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D 1/08 Control of attitude, i.e. control of roll, pitch, or yaw

Abstract

The embodiment of the invention provides an obstacle avoidance method and device for a robot, and the robot. The method comprises the following steps: acquiring structural information, current pose information and obstacle information of the robot; constructing a robot three-dimensional model according to the structural information and the current pose information, and constructing an obstacle three-dimensional model according to the obstacle information; splitting the robot three-dimensional model into a plurality of sub-models according to a preset splitting rule; respectively comparing the plurality of sub-models with the obstacle three-dimensional model, and determining whether the robot can avoid the obstacle when traveling in the current posture; and if so, controlling the robot to travel in the current posture. Because the robot three-dimensional model is split into a plurality of sub-models, the point cloud coordinates of the obstacles do not need to be calculated; only a simple obstacle three-dimensional model needs to be constructed and compared with the sub-models. The amount of calculation is therefore small, the reaction speed of the robot is greatly improved, and the user experience is good.

Description

Obstacle avoidance method and device for robot and robot
Technical Field
The invention relates to the technical field of robot control, in particular to an obstacle avoidance method and device for a robot and the robot.
Background
In recent years, with the rapid development of AI (Artificial Intelligence) technology, intelligent service robots, which may include floor-sweeping robots, carrying robots, chatting robots, and the like, have been applied more and more widely and have a very broad market prospect.
While moving, these robots need to avoid obstacles so as not to collide with their surroundings. Generally, a robot can be provided with various sensors, including a laser radar, a depth camera, an infrared sensor, and the like; the data detected by the sensors are fused, the point cloud coordinates of all obstacles in 3D space are accurately calculated, and it is then determined, according to the position and posture of the robot, whether the robot can successfully avoid the obstacles when traveling in its current posture.
However, because the number of point cloud coordinates of all obstacles in 3D space is very large, the amount of calculation required by this obstacle avoidance method is huge and the available computing resources cannot meet the demand. As a result, the robot may fail to reach a decision for a long time during obstacle avoidance, its reaction speed is slow, and the user experience is very poor.
Disclosure of Invention
The embodiment of the invention aims to provide an obstacle avoidance method and device for a robot, and the robot, so as to reduce the amount of calculation required for obstacle avoidance and improve the reaction speed of the robot. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides an obstacle avoidance method for a robot, where the method includes:
acquiring structural information, current pose information and obstacle information of the robot;
constructing a three-dimensional robot model according to the structural information and the current pose information, and constructing an obstacle three-dimensional model according to the obstacle information;
splitting the three-dimensional robot model into a plurality of sub-models according to a preset splitting rule;
respectively comparing the plurality of sub models with the three-dimensional model of the obstacle, and determining whether the robot can avoid the obstacle when traveling at the current posture;
and if so, controlling the robot to travel at the current posture.
Optionally, the step of splitting the three-dimensional robot model into a plurality of submodels according to a preset splitting rule includes:
and splitting the three-dimensional model of the robot according to the current pose information to obtain a plurality of sub-models.
Optionally, the step of respectively comparing the plurality of sub-models with the obstacle three-dimensional model to determine whether the robot can avoid the obstacle when traveling in the current posture includes:
determining a plurality of obstacle avoidance planes according to the spatial position of the sub-model;
respectively determining a projection area of each sub-model on a corresponding target obstacle avoidance plane, wherein the target obstacle avoidance plane corresponding to the sub-model is as follows: in a preset direction, an obstacle avoidance plane closest to the sub-model;
determining a projection area of the obstacle three-dimensional model on each obstacle avoidance plane;
determining whether a projection area of the sub-model on each obstacle avoidance plane and a projection area of an obstacle overlap in a traveling direction of the robot;
if not, determining that the robot can avoid the obstacle when traveling at the current posture.
Optionally, the step of respectively comparing the plurality of sub-models with the obstacle three-dimensional model to determine whether the robot can avoid the obstacle when traveling in the current posture includes:
and comparing the coordinates of the target point of each submodel with the coordinates of the three-dimensional model of the obstacle to determine whether the robot can avoid the obstacle when travelling in the current posture, wherein the target point of each submodel is used for representing the outer contour of the submodel.
Optionally, in a case where it is determined that the robot cannot avoid the obstacle while traveling at the current posture, the method further includes:
adjusting the posture of the robot;
controlling the robot to travel at the adjusted pose.
Optionally, the step of adjusting the posture of the robot includes:
comparing pre-stored pose models with the obstacle three-dimensional model to determine a target pose model, wherein the target pose model is a pose model whose corresponding posture enables the robot to avoid the obstacle when traveling;
and adjusting the posture of the robot to a posture corresponding to the target pose model.
Optionally, the step of adjusting the posture of the robot includes:
calculating a target posture according to the current posture and the obstacle three-dimensional model, wherein the target posture is a posture in which the robot can avoid the obstacle when traveling in the current direction;
and adjusting the posture of the robot to the target posture.
In a second aspect, an embodiment of the present invention provides an obstacle avoidance device for a robot, where the obstacle avoidance device includes:
the information acquisition module is used for acquiring the structural information, the current pose information and the obstacle information of the robot;
the model building module is used for building a three-dimensional robot model according to the structural information and the current pose information and building a three-dimensional obstacle model according to the obstacle information;
the model splitting module is used for splitting the three-dimensional robot model into a plurality of sub models according to a preset splitting rule;
the obstacle avoidance determining module is used for respectively comparing the plurality of sub models with the obstacle three-dimensional model and determining whether the robot can avoid the obstacle when travelling in the current posture;
and the traveling control module is used for controlling the robot to travel in the current posture when it is determined that the robot can avoid the obstacle while traveling in the current posture.
Optionally, the model splitting module includes:
and the model splitting unit is used for splitting the three-dimensional model of the robot according to the current pose information to obtain a plurality of sub models.
Optionally, the obstacle avoidance determining module includes:
the obstacle avoidance plane determining unit is used for determining a plurality of obstacle avoidance planes according to the space position of the sub model;
a first projection area determining unit, configured to determine a projection area of each sub-model on a corresponding target obstacle avoidance plane, where the target obstacle avoidance plane corresponding to the sub-model is: in a preset direction, an obstacle avoidance plane closest to the sub-model;
the second projection area determining unit is used for determining the projection area of the obstacle three-dimensional model on each obstacle avoidance plane;
an overlap area determination unit for determining whether a projection area of the sub-model on each obstacle avoidance plane overlaps with a projection area of the obstacle in the traveling direction of the robot;
and the first obstacle avoidance determining unit is used for determining that the robot can avoid the obstacle when traveling in the current posture, in a case where the projection area of the sub-model on each obstacle avoidance plane does not overlap with the projection area of the obstacle in the traveling direction of the robot.
Optionally, the obstacle avoidance determining module includes:
and the second obstacle avoidance determining unit is used for comparing the coordinates of the target point of each sub-model with the coordinates of the three-dimensional obstacle model and determining whether the robot can avoid the obstacle when travelling in the current posture, wherein the target point of each sub-model is used for representing the outer contour of the sub-model.
Optionally, the apparatus further comprises:
the posture adjustment module is used for adjusting the posture of the robot under the condition that it is determined that the robot cannot avoid the obstacle when traveling in the current posture;
and the posture control module is used for controlling the robot to travel in the adjusted posture.
Optionally, the posture adjustment module includes:
a pose determining unit, configured to compare pre-stored pose models with the obstacle three-dimensional model to determine a target pose model, where the target pose model is a pose model whose corresponding posture enables the robot to avoid the obstacle when traveling;
and the first posture adjusting unit is used for adjusting the posture of the robot to the posture corresponding to the target pose model.
Optionally, the posture adjustment module includes:
a target posture determining unit, configured to calculate a target posture according to the current posture and the obstacle three-dimensional model, where the target posture is a posture in which the robot can avoid the obstacle when traveling in the current direction;
and the second posture adjusting unit is used for adjusting the posture of the robot to the target posture.
In a third aspect, an embodiment of the present invention further provides a robot, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete mutual communication through the communication bus;
a memory for storing a computer program;
and the processor is used for realizing the steps of the obstacle avoidance method of the robot when executing the program stored in the memory.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the obstacle avoidance method for the robot are implemented.
According to the scheme provided by the embodiment of the invention, the structure information, the current pose information and the obstacle information of the robot are first obtained; a robot three-dimensional model is then constructed according to the structure information and the current pose information, and an obstacle three-dimensional model is constructed according to the obstacle information; the robot three-dimensional model is split into a plurality of sub-models according to a preset splitting rule; the plurality of sub-models are respectively compared with the obstacle three-dimensional model to determine whether the robot can avoid the obstacle when traveling in the current posture; and if so, the robot is controlled to travel in the current posture. Because the robot three-dimensional model is split into a plurality of sub-models, the point cloud coordinates of the obstacles do not need to be calculated; only a simple obstacle three-dimensional model needs to be constructed and compared with the sub-models, so the amount of calculation is small, the reaction speed of the robot is greatly improved, and the user experience is good.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flowchart of an obstacle avoidance method for a robot according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the three-dimensional model of the robot and the three-dimensional model of the obstacle in the embodiment shown in FIG. 1;
FIG. 3 is a schematic diagram of the splitting of the robot three-dimensional model in the embodiment shown in FIG. 1;
FIG. 4 is a detailed flowchart of step S104 in the embodiment shown in FIG. 1;
FIG. 5 is a schematic view of an obstacle avoidance plane in the embodiment of FIG. 4;
FIG. 6 is another schematic view of the obstacle avoidance plane of the embodiment shown in FIG. 4;
FIG. 7 is a schematic diagram of the splitting of the obstacle three-dimensional model in the embodiment shown in FIG. 4;
FIG. 8(a) is a schematic view of the projection area in the embodiment of FIG. 4;
FIG. 8(b) is another schematic view of the projection area in the embodiment of FIG. 4;
fig. 9 is a schematic structural diagram of an obstacle avoidance device of a robot according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of a robot according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to reduce the calculated amount of a robot in obstacle avoidance and improve the reaction speed of the robot, the embodiment of the invention provides an obstacle avoidance method and device of the robot, the robot and a computer readable storage medium.
First, an obstacle avoidance method for a robot according to an embodiment of the present invention is described below.
The obstacle avoidance method for the robot according to the embodiment of the present invention may be applied to the robot itself, or may be applied to a control device, such as a computer, a mobile phone, etc., which establishes a communication connection with the robot and is used for controlling the robot to travel.
As shown in fig. 1, an obstacle avoidance method for a robot includes:
s101, acquiring structural information, current pose information and obstacle information of the robot;
s102, constructing a three-dimensional robot model according to the structure information and the current pose information, and constructing an obstacle three-dimensional model according to the obstacle information;
s103, splitting the three-dimensional model of the robot into a plurality of sub-models according to a preset splitting rule;
s104, respectively comparing the plurality of sub models with the obstacle three-dimensional model, and determining whether the robot can avoid the obstacle when traveling in the current posture;
and S105, if so, controlling the robot to move in the current posture.
It can be seen that in the scheme provided by the embodiment of the present invention, the electronic device first obtains the structure information, the current pose information, and the obstacle information of the robot, then constructs a three-dimensional robot model according to the structure information and the current pose information, constructs a three-dimensional obstacle model according to the obstacle information, splits the three-dimensional robot model into a plurality of submodels according to a preset splitting rule, then compares the plurality of submodels with the three-dimensional obstacle model, and determines whether the robot can avoid the obstacle when traveling in the current posture, and if so, controls the robot to travel in the current posture. The method of splitting the three-dimensional model of the robot into the plurality of sub-models is adopted, the point cloud coordinates of the obstacles do not need to be calculated, only the simple three-dimensional model of the obstacles needs to be constructed, the plurality of sub-models and the three-dimensional model of the obstacles are compared, the calculated amount is small, the reaction speed of the robot is greatly improved, and the user experience is good.
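The overall flow of steps S101 to S105 can be summarized in code. The following is only a minimal sketch, not the patented implementation; all helper callables (get_info, build_models, split_model, sub_model_clears, travel, adjust) are hypothetical stand-ins for the steps described above.

```python
def obstacle_avoidance_step(get_info, build_models, split_model,
                            sub_model_clears, travel, adjust):
    structure, pose, obstacle_info = get_info()                     # S101
    robot_model, obstacle_model = build_models(structure, pose,
                                               obstacle_info)       # S102
    sub_models = split_model(robot_model, pose)                     # S103
    # S104: the current posture is kept only if every sub-model clears
    can_pass = all(sub_model_clears(s, obstacle_model) for s in sub_models)
    if can_pass:
        travel(pose)                                                # S105
    else:
        adjust()   # change posture or traveling direction, as described later
```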
It is understood that in the embodiment of the present invention, as long as the information such as the type and model of the robot is determined, the structural information of the robot may be determined, wherein the structural information may include information such as the shapes and sizes of the various components included in the robot and the various components.
The current pose information of the robot may include a current position and posture of the robot, and the posture may include information on a position, an angle, and the like of a movable part of the robot in a three-dimensional space. For example, if the robot has a movable part of an arm, a head, etc., the pose may include information of a height, an angle of the arm, an angle of the head, etc. For another example, if the robot is a handling robot having a tray, the attitude may include information such as the height of the tray.
The obstacle information of the obstacle is information such as a shape, a size, and a position of an obstacle that may block the robot in the current traveling direction of the robot, and is not particularly limited as long as the obstacle information can represent an approximate outline and a position of the obstacle. For example, if the obstacle is a table, the obstacle information acquired by the electronic device may include information such as the height and width of the table and the position of the table.
In one embodiment, it is reasonable that the obstacle information may be obtained by a sensor mounted to the robot, or by a sensor provided at some fixed location in the environment. The sensor may include a laser radar, a depth sensor, etc., as long as obstacle information of an obstacle can be obtained, and is not particularly limited herein.
In step S101, if the electronic device is the robot itself, the robot may acquire its own structure information and current pose information, and acquire the obstacle information by means of a sensor or the like. If the electronic equipment is control equipment which is in communication connection with the robot, the electronic equipment can acquire the structural information and the current pose information of the robot through the communication connection, and acquire the obstacle information detected by the sensor through the communication connection with the sensor and the like.
Then, the electronic device may construct a three-dimensional robot model according to the acquired structural information and current pose information of the robot, and construct a three-dimensional obstacle model according to the obstacle information. Specifically, the electronic device may construct a three-dimensional robot model and a three-dimensional obstacle model by a stereo modeling algorithm or the like. The three-dimensional model of the robot and the three-dimensional model of the obstacle may be constructed by any one of the mathematical three-dimensional modeling techniques, which are not specifically limited and described herein.
In order to further reduce the amount of calculation and facilitate subsequent processing such as splitting of the robot three-dimensional model, the robot three-dimensional model and the obstacle three-dimensional model constructed by the electronic device may be three-dimensional models formed by simple geometric bodies. Models that exactly conform to the actual shapes of the robot and the obstacle do not need to be constructed; it is sufficient that the models roughly represent the shapes and sizes of the robot and the obstacle, so the amount of calculation is further reduced and the processing speed is higher.
For example, the electronic device may construct a three-dimensional robot model composed of simple geometric bodies such as a rectangular parallelepiped and a cylinder according to the structural information and the current pose information of the robot, and similarly, the electronic device may also construct a three-dimensional obstacle model composed of simple geometric bodies such as a rectangular parallelepiped and a cylinder according to the obstacle information. As shown in fig. 2, for the robot 10, the electronic device may build a robot three-dimensional model 110, and for the obstacle 20, the electronic device may build an obstacle three-dimensional model 210. It can be seen that the robot three-dimensional model 110 and the obstacle three-dimensional model 210 are both three-dimensional models composed of cuboids or cylinders. In order to clearly and concisely represent the three-dimensional robot model and the three-dimensional obstacle model, fig. 2 shows the projection of the three-dimensional robot model and the three-dimensional obstacle model onto a fixed plane, which may be a plane perpendicular to the ground.
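As a rough illustration of such simplified models, the sketch below (an assumption for illustration, not the patent's data structures) represents the robot and the obstacle with axis-aligned boxes and vertical cylinders; all class names and dimensions are made up for the example.

```python
from dataclasses import dataclass

@dataclass
class Box:
    min_corner: tuple  # (x, y, z) of the lowest corner
    max_corner: tuple  # (x, y, z) of the opposite corner

@dataclass
class Cylinder:
    bottom_center: tuple  # (x, y, z) of the bottom-face center
    radius: float
    height: float

# A tray-carrying robot approximated by two boxes (body + tray), and a table
# approximated by a box for the top and a cylinder for one leg.
robot_model = [
    Box((0.0, 0.0, 0.0), (0.5, 0.5, 1.2)),     # body
    Box((0.5, 0.0, 0.70), (0.8, 0.5, 0.75)),   # tray extending forward
]
obstacle_model = [
    Box((2.0, -0.5, 0.70), (3.0, 1.0, 0.75)),  # table top
    Cylinder((2.1, -0.4, 0.0), 0.03, 0.70),    # one table leg
]
```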
In step S103, the electronic device may split the three-dimensional robot model into a plurality of sub-models according to a preset splitting rule. Furthermore, the plurality of sub models can be respectively compared with the three-dimensional model of the obstacle to determine whether the robot can avoid the obstacle when traveling in the current posture. For clarity of the scheme and clarity of layout, specific splitting modes and specific modes for determining whether the robot can avoid the obstacle when traveling in the current posture will be described in the following.
If the electronic device determines that the robot is able to avoid the obstacle while traveling in the current pose, the robot may be controlled to travel in the current pose. If the electronic device determines that the robot cannot avoid the obstacle when traveling in the current posture, the traveling direction of the robot may be changed, or the posture of the robot may be adjusted so that the robot may avoid the obstacle.
As an implementation manner of the embodiment of the present invention, the step of splitting the three-dimensional robot model into a plurality of sub-models according to the preset splitting rule may include:
and splitting the three-dimensional model of the robot according to the current pose information to obtain a plurality of sub-models.
The current pose information of the robot may affect the shape of the robot three-dimensional model. For example, when the arm of the robot is in different positions, the shape of the robot three-dimensional model is also different. For another example, the robot may have a part such as a tray, which generally extends beyond the other parts of the robot so that a user can conveniently place articles on it; the height of the tray and the length by which it extends both affect the shape of the robot three-dimensional model.
Therefore, the electronic equipment can split the three-dimensional model of the robot according to the current pose information of the robot, and further obtain a plurality of sub-models. As shown in fig. 3, if the robot 30 has a tray 31, and the tray 31 is located at the position shown in fig. 3 at the current time, the electronic device may split the three-dimensional model of the robot according to the current posture of the robot, that is, the position of the tray 31 at the current time, to obtain a sub-model 311, a sub-model 312, and a sub-model 313.
In order to simplify the splitting and facilitate the subsequent model comparison, when the electronic device splits the robot three-dimensional model into a plurality of sub-models according to the preset splitting rule, it can split the robot three-dimensional model into a plurality of simple, regular geometric bodies as the sub-models, so that whether the robot can avoid the obstacle when traveling in the current posture can be determined more conveniently in the subsequent process.
Therefore, in the embodiment, the electronic device can split the three-dimensional robot model according to the current pose information of the robot to obtain a plurality of sub models, the splitting operation is simple and quick, the speed of subsequently comparing the plurality of sub models with the three-dimensional obstacle model can be increased, and the overall speed of obstacle avoidance of the robot is increased.
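A minimal sketch of such a pose-dependent split is given below. The cutting rule (slicing the robot at the current tray height so that every sub-model is an axis-aligned box) and all dimensions are illustrative assumptions, not the preset splitting rule itself.

```python
def split_robot_model(body_depth, body_width, body_height,
                      tray_height, tray_length, tray_thickness=0.05):
    """Return (min_corner, max_corner) boxes roughly matching the three
    sub-models of Fig. 3: below the tray, the tray itself, above the tray."""
    below_tray = ((0.0, 0.0, 0.0),
                  (body_depth, body_width, tray_height))
    tray = ((0.0, 0.0, tray_height),
            (body_depth + tray_length, body_width, tray_height + tray_thickness))
    above_tray = ((0.0, 0.0, tray_height + tray_thickness),
                  (body_depth, body_width, body_height))
    return [below_tray, tray, above_tray]

# e.g. a 0.5 m x 0.5 m x 1.2 m robot whose 0.3 m tray is currently at 0.7 m
sub_models = split_robot_model(0.5, 0.5, 1.2, 0.7, 0.3)
```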
As an embodiment of the present invention, as shown in fig. 4, the step of comparing the plurality of submodels with the three-dimensional model of the obstacle to determine whether the robot can avoid the obstacle while traveling in the current posture may include:
s401, determining a plurality of obstacle avoidance planes according to the spatial position of the sub-model;
After the sub-models are obtained through splitting, the electronic device can determine a plurality of obstacle avoidance planes according to the spatial positions of the sub-models. Generally, an obstacle avoidance plane is a plane parallel to the traveling direction of the robot, which ensures that, in the subsequent steps, whether the robot can avoid the obstacle can be determined accurately from the projection areas of the sub-models and of the obstacle three-dimensional model on the obstacle avoidance planes. For example, since the robot typically travels on the ground, the obstacle avoidance plane may be a plane parallel to the ground.
Since the position, shape and size of each sub-model in space may be different, the electronic device may determine the position of the obstacle avoidance plane according to the spatial position of the sub-model, so as to facilitate the subsequent determination of the projection area corresponding to each sub-model.
In one example, as shown in fig. 5, a three-dimensional robot model of robot 50 is split into submodel 511, submodel 512, and submodel 513. Then, the electronic device can determine a plurality of obstacle avoidance planes, i.e., the obstacle avoidance plane 01 and the obstacle avoidance plane 02, according to the spatial positions of the sub-model 511, the sub-model 512, and the sub-model 513.
In another example, as shown in fig. 6, the three-dimensional robot model of the robot 60 is split into a submodel 611, a submodel 612, and a submodel 613. Then, the electronic device may determine a plurality of obstacle avoidance planes, i.e., the obstacle avoidance plane 03 and the obstacle avoidance plane 04, according to the spatial positions of the sub model 611, the sub model 612, and the sub model 613.
S402, respectively determining a projection area of each sub-model on a corresponding target obstacle avoidance plane;
the target obstacle avoidance plane corresponding to the sub-model is as follows: and in the preset direction, the obstacle avoidance plane closest to the sub-model. In general, the preset direction may be a direction perpendicular to an obstacle avoidance plane, a direction perpendicular to a traveling direction of the robot, and the like, for example, a direction perpendicular to the ground.
As shown in fig. 5, if the preset direction is a direction perpendicular to the obstacle avoidance plane, the target obstacle avoidance plane corresponding to the submodel 511 and the submodel 512 is 01, and the target obstacle avoidance plane corresponding to the submodel 513 is 02.
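The assignment of a target obstacle avoidance plane can be sketched as follows, assuming horizontal planes identified by their heights and a vertical preset direction; measuring the distance from the sub-model's vertical center is an illustrative choice, not the patent's prescribed rule.

```python
def target_plane(sub_model_z_range, plane_heights):
    """Pick the obstacle avoidance plane closest to the sub-model vertically."""
    z_min, z_max = sub_model_z_range
    z_center = 0.5 * (z_min + z_max)
    return min(plane_heights, key=lambda h: abs(h - z_center))

# A tray sub-model spanning 0.70-0.75 m maps to the plane at 0.7 m, not 0.0 m.
print(target_plane((0.70, 0.75), [0.0, 0.7]))  # -> 0.7
```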
S403, determining a projection area of the obstacle three-dimensional model on each obstacle avoidance plane;
the electronic equipment can also determine the projection area of the barrier three-dimensional model on each obstacle avoidance plane, so that the projection areas can be conveniently compared in the following process.
In an embodiment, if the obstacle avoidance plane penetrates through the three-dimensional obstacle model, the three-dimensional obstacle model may also be split into a plurality of sub-models, which are hereinafter referred to as obstacle sub-models for clarity of description, in order to determine a projection area corresponding to the three-dimensional obstacle model.
As shown in fig. 7, the obstacle avoidance plane 05 penetrates through the obstacle three-dimensional model, and at this time, the electronic device may split the obstacle three-dimensional model into an obstacle submodel 21 and an obstacle submodel 22. Furthermore, the projection area of the obstacle sub-model 21 on the obstacle avoidance plane 05 and the projection area of the obstacle sub-model 22 on the obstacle avoidance plane 06 can be determined.
S404, determining whether the projection area of the sub-model on each obstacle avoidance plane is overlapped with the projection area of the obstacle in the advancing direction of the robot, and if not, executing the step S405; if so, determining that the robot cannot avoid the obstacle when traveling at the current posture.
After determining the projection areas corresponding to the sub-models and to the obstacle, the electronic device may determine whether the projection area of each sub-model on its obstacle avoidance plane overlaps with the projection area of the obstacle in the traveling direction of the robot. If not, it indicates that the robot, traveling in the current posture, will not collide with the obstacle, and step S405 may be executed.
If the projection area of a sub-model on a certain obstacle avoidance plane and the projection area of the obstacle overlap in the traveling direction of the robot, this indicates that the robot, traveling in the current posture, would collide with the obstacle; in this case, the electronic device can determine that the robot cannot avoid the obstacle when traveling in the current posture, and can then take actions such as adjusting the traveling direction.
Illustratively, as shown in fig. 8(a), the robot three-dimensional model includes two sub-models. The projection area of one sub-model on its corresponding target obstacle avoidance plane is 81, and the projection area of the obstacle three-dimensional model on that plane is 82; it can be seen that they do not overlap in the robot traveling direction 80. The projection area of the other sub-model on its corresponding target obstacle avoidance plane is 83, and the projection area of the obstacle three-dimensional model on that plane is 84; again there is no overlap in the robot traveling direction 80, which indicates that the robot, traveling in the current posture, will not collide with the obstacle.
As shown in fig. 8(b), the projection area of a sub-model included in the robot three-dimensional model on its corresponding target obstacle avoidance plane is 85, and the projection area of the obstacle three-dimensional model on that plane is 86; it can be seen that they overlap in the robot traveling direction 80, which indicates that the robot, traveling in the current posture, would collide with the obstacle.
S405, determining that the robot can avoid the obstacle when traveling at the current posture.
When the electronic device determines that the projection area of each sub-model on its obstacle avoidance plane does not overlap with the projection area of the obstacle in the traveling direction of the robot, it can determine that the robot can avoid the obstacle when traveling in the current posture.
Therefore, in the embodiment, the electronic device can determine the plurality of obstacle avoidance planes, and further judge whether the robot can avoid the obstacle when traveling in the current posture according to the projection area of each sub-model on the obstacle avoidance plane and the overlapping condition of the three-dimensional model of the obstacle in the projection area of each obstacle avoidance plane, so that the calculated amount is small, the judgment is accurate, the reaction speed of the robot can be further improved, and the user experience is improved.
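A minimal sketch of the overlap test of steps S404 and S405 is shown below, under the simplifying assumptions that both projections are axis-aligned rectangles on the obstacle avoidance plane and that the robot travels along the +x axis of that plane; the sub-model's projection is swept forward and a collision is reported if the swept band meets the obstacle's projection. This is an illustrative simplification, not the patented test itself.

```python
def overlaps_in_travel_direction(sub_rect, obs_rect):
    (sx0, sy0), (sx1, sy1) = sub_rect   # rectangle as (min corner), (max corner)
    (ox0, oy0), (ox1, oy1) = obs_rect
    ahead = ox1 >= sx0                              # obstacle not entirely behind
    lateral_overlap = not (oy1 < sy0 or oy0 > sy1)  # y-intervals intersect
    return ahead and lateral_overlap

# A Fig. 8(a)-style case: projections miss each other laterally, robot can pass.
print(overlaps_in_travel_direction(((0, 0), (1, 1)), ((3, 2), (4, 3))))    # False
# A Fig. 8(b)-style case: the obstacle lies in the swept band ahead.
print(overlaps_in_travel_direction(((0, 0), (1, 1)), ((3, 0.5), (4, 2))))  # True
```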
As an embodiment of the present invention, the step of comparing the plurality of sub models with the three-dimensional model of the obstacle to determine whether the robot can avoid the obstacle when traveling in the current posture may include:
and comparing the coordinates of the target point of each sub model with the coordinates of the three-dimensional model of the obstacle, and determining whether the robot can avoid the obstacle when traveling in the current posture.
Wherein the target point of each submodel is used to represent the outer contour of the submodel. For example, if the sub-model is a cuboid, the target points may be eight vertices of the cuboid.
In this embodiment, the electronic device may determine whether the robot can avoid the obstacle while traveling in the current pose by comparing the coordinates of the target point of each sub model with the coordinates of the three-dimensional model of the obstacle. It is understood that the coordinates of some points in the outer contour of the three-dimensional model of the obstacle may represent the actual position and the approximate shape of the obstacle, and then the electronic device may determine whether the robot can avoid the obstacle by comparing the coordinates of the target point and the coordinates of some points in the outer contour of the three-dimensional model of the obstacle.
For example, suppose a sub-model of the robot three-dimensional model is a rectangular parallelepiped whose vertex coordinates include (26, 40, 15), (50, 50, 15), (26, 40, 30), (50, 50, 30), and (26, 50, 30), and the obstacle three-dimensional model is a cylinder whose bottom-face center is at (60, 40, 56), with a radius of 8 and a height of 20. The traveling direction of the robot is the Z-axis direction; for this sub-model, it is clear that the robot can avoid the obstacle when traveling in the current posture.
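The worked example above can be checked with a few lines of code. Assuming travel along the Z axis, the cuboid sub-model can only meet the cylinder if their footprints in the X-Y plane intersect; the sketch below shows that they do not, since the cuboid spans x in [26, 50] while the cylinder spans x in [52, 68]. The function name and simplification are assumptions for illustration only.

```python
def box_hits_cylinder_xy(box_x, box_y, center_xy, radius):
    """True if the box footprint (x- and y-ranges) touches the cylinder footprint."""
    cx, cy = center_xy
    dx = max(box_x[0] - cx, 0.0, cx - box_x[1])  # x-distance from axis to box
    dy = max(box_y[0] - cy, 0.0, cy - box_y[1])  # y-distance from axis to box
    return dx * dx + dy * dy <= radius * radius

print(box_hits_cylinder_xy((26, 50), (40, 50), (60, 40), 8))  # False -> avoidable
```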
By comparing the coordinates of the target points of each sub-model with the coordinates of the obstacle three-dimensional model, whether each sub-model can avoid the obstacle when the robot travels in the current posture can be determined, and thus whether the robot as a whole can avoid the obstacle when traveling in the current posture can be determined. It will be appreciated that the robot can avoid the obstacle only if all sub-models can avoid it; if one or more sub-models cannot avoid the obstacle, the robot cannot avoid it.
Therefore, in this embodiment, the electronic device can determine whether the robot can avoid the obstacle when traveling in the current posture through the coordinates of the target point of each sub-model and the coordinates of the three-dimensional model of the obstacle, and can conveniently and quickly determine whether the robot can avoid the obstacle, so that the calculation amount is small, and the processing speed is high.
In a case where it is determined that the robot cannot avoid the obstacle while traveling in the current posture, as an implementation of the embodiment of the present invention, the method may further include:
adjusting the posture of the robot; controlling the robot to travel at the adjusted pose.
Since the posture of the robot, for example the height and angle of the arm or the height of the tray, affects whether the robot can avoid an obstacle, when it is determined that the robot cannot avoid the obstacle while traveling in the current posture, the electronic device may adjust the posture of the robot so that the obstacle can be avoided, and then control the robot to travel in the adjusted posture.
As an implementation manner of the embodiment of the present invention, the step of adjusting the posture of the robot may include:
comparing pre-stored pose models with the obstacle three-dimensional model to determine a target pose model; and adjusting the posture of the robot to a posture corresponding to the target pose model.
In order to be able to adjust the pose of the robot at any time to avoid obstacles, in one embodiment the electronic device may pre-store pose models. A pose model is a robot three-dimensional model corresponding to the robot in a certain pose. Because the robot typically has several common or fixed postures, and in order to further reduce the amount of calculation and the memory space occupied by stored pose models, the pose models pre-stored in the electronic device may be the pose models corresponding to those common or fixed postures.
For example, if the robot is a transfer robot with a tray, the tray has five fixed positions, and in the using process of the robot, the tray can be adjusted to one of the five positions according to actual needs, so that the electronic device can prestore pose models respectively corresponding to the tray located at the five positions.
Furthermore, the electronic device may compare the pre-stored pose models with the obstacle three-dimensional model to determine a target pose model, where the target pose model is a pose model whose corresponding posture enables the robot to avoid the obstacle when traveling. That is, if the current posture of the robot is adjusted to the posture corresponding to the target pose model, the robot can avoid the obstacle. For example, if the electronic device pre-stores five pose models, it can compare these five pose models with the obstacle three-dimensional model and determine a pose model in whose corresponding posture the robot can avoid the obstacle when traveling, i.e., the target pose model.
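Selecting the target pose model from a handful of pre-stored pose models can be sketched as below. Pose models are assumed to carry their own sub-models, and clears_obstacle stands for either of the comparison methods described above (projection overlap or coordinate comparison); these names are illustrative assumptions, not the patented selection rule.

```python
def select_target_pose_model(pose_models, obstacle_model, clears_obstacle):
    """Return the first stored pose model whose posture avoids the obstacle."""
    for pose_model in pose_models:
        if all(clears_obstacle(sub, obstacle_model)
               for sub in pose_model.sub_models):
            return pose_model
    return None  # no stored posture works; change the traveling direction instead
```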
For the specific way of comparing the pre-stored pose model with the three-dimensional model of the obstacle by the electronic device, any one of the above-mentioned ways of comparing the three-dimensional model of the robot with the three-dimensional model of the obstacle can be adopted, and details are not repeated here.
After the electronic equipment determines the target pose model, the pose of the robot can be adjusted to the pose corresponding to the target pose model, and then the robot is controlled to move in the adjusted pose so as to avoid the obstacle.
Therefore, in the embodiment, under the condition that the electronic device determines that the robot cannot avoid the obstacle when traveling in the current posture, the pre-stored posture model can be compared with the obstacle three-dimensional model to determine the target posture model, and then the posture of the robot is adjusted to the posture corresponding to the target posture model to avoid the obstacle, so that the robot can be adjusted quickly to avoid the obstacle, and the user experience is improved.
As an implementation manner of the embodiment of the present invention, the step of adjusting the posture of the robot may include:
calculating a target attitude according to the current attitude and the three-dimensional model of the obstacle; and adjusting the posture of the robot to the target posture.
Under the condition that the electronic equipment determines that the robot cannot avoid the obstacle when travelling in the current posture, the electronic equipment can also calculate a target posture according to the current posture and the obstacle three-dimensional model, wherein the target posture is as follows: when the robot travels in the current direction, the robot can avoid the attitude corresponding to the obstacle.
Specifically, the electronic device may determine, according to the current posture and the obstacle three-dimensional model, which component may collide with the obstacle, and then calculate in which posture that component must be for the robot to avoid the obstacle when traveling in the current direction, thereby obtaining the target posture. The electronic device may then adjust the posture of the robot to the target posture and control the robot to travel in the adjusted posture so as to avoid the obstacle.
For example, suppose that at the current moment the arm of the robot is at a height of 40 cm above the ground and the electronic device determines that the robot cannot avoid the obstacle when traveling in the current posture. The electronic device then calculates, according to the current posture of the robot and the obstacle three-dimensional model, a target posture in which the arm is at a height of 60 cm above the ground; the electronic device can therefore adjust the height of the arm to 60 cm, which ensures that the robot avoids the obstacle when traveling in the adjusted posture.
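For the arm-height example, computing the target posture might look like the sketch below, assuming the only adjustable quantity is the arm height, the obstacle is modelled with a known top height, and a small clearance margin is added. The 55 cm obstacle top and 5 cm margin are made-up values chosen so that the result matches the 40 cm to 60 cm adjustment in the example.

```python
def target_arm_height_cm(current_height_cm, obstacle_top_cm, margin_cm=5):
    """Raise the arm just above the obstacle only if it would otherwise be hit."""
    if current_height_cm <= obstacle_top_cm:
        return obstacle_top_cm + margin_cm
    return current_height_cm

print(target_arm_height_cm(40, 55))  # -> 60, matching the 40 cm -> 60 cm example
```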
Therefore, in the embodiment, under the condition that the electronic device determines that the robot cannot avoid the obstacle when traveling in the current posture, the target posture can be calculated according to the current posture and the three-dimensional model of the obstacle, and then the posture of the robot is adjusted to the target posture to avoid the obstacle, so that the robot can be adjusted quickly to avoid the obstacle, and the user experience is improved.
Corresponding to the obstacle avoidance method of the robot, the embodiment of the invention also provides an obstacle avoidance device of the robot.
The following describes an obstacle avoidance device for a robot according to an embodiment of the present invention.
As shown in fig. 9, an obstacle avoidance apparatus for a robot includes:
an information obtaining module 910, configured to obtain structural information of the robot, current pose information, and obstacle information of an obstacle;
the model building module 920 is configured to build a three-dimensional robot model according to the structural information and the current pose information, and build a three-dimensional obstacle model according to the obstacle information;
a model splitting module 930, configured to split the three-dimensional robot model into a plurality of submodels according to a preset splitting rule;
an obstacle avoidance determining module 940, configured to compare the multiple sub models with the three-dimensional model of the obstacle, respectively, and determine whether the robot can avoid the obstacle when traveling in the current posture;
a travel control module 950 for controlling the robot to travel at the current posture when it is determined that the robot can avoid the obstacle while traveling at the current posture.
It can be seen that in the scheme provided by the embodiment of the present invention, first, structure information, current pose information, and obstacle information of a robot are obtained, then, a three-dimensional robot model is constructed according to the structure information and the current pose information, a three-dimensional obstacle model is constructed according to the obstacle information, the three-dimensional robot model is split into a plurality of sub models according to a preset splitting rule, then, the plurality of sub models are respectively compared with the three-dimensional obstacle model, it is determined whether the robot can avoid the obstacle when traveling in the current posture, and if so, the robot is controlled to travel in the current posture. The method of splitting the three-dimensional model of the robot into the plurality of sub-models is adopted, the point cloud coordinates of the obstacles do not need to be calculated, only the simple three-dimensional model of the obstacles needs to be constructed, the plurality of sub-models and the three-dimensional model of the obstacles are compared, the calculated amount is small, the reaction speed of the robot is greatly improved, and the user experience is good.
As an implementation manner of the embodiment of the present invention, the model splitting module 930 may include:
and a model splitting unit (not shown in fig. 9) configured to split the three-dimensional robot model according to the current pose information to obtain a plurality of sub models.
As an implementation manner of the embodiment of the present invention, the obstacle avoidance determining module 940 may include:
an obstacle avoidance plane determining unit (not shown in fig. 9) configured to determine a plurality of obstacle avoidance planes according to the spatial position of the sub-model;
a first projection area determining unit (not shown in fig. 9) configured to determine a projection area of each sub-model on a corresponding target obstacle avoidance plane, where the target obstacle avoidance plane corresponding to the sub-model is: in a preset direction, an obstacle avoidance plane closest to the sub-model;
a second projection area determining unit (not shown in fig. 9) for determining a projection area of the three-dimensional model of the obstacle on each obstacle avoidance plane;
an overlap area determination unit (not shown in fig. 9) for determining whether a projection area of the sub model on each obstacle avoidance plane overlaps with a projection area of the obstacle in a traveling direction of the robot;
a first obstacle avoidance determining unit (not shown in fig. 9) configured to determine, when a projection area of the sub model on each obstacle avoidance plane and a projection area of an obstacle do not overlap in a traveling direction of the robot, that the robot can avoid the obstacle while traveling in the current posture.
As an implementation manner of the embodiment of the present invention, the obstacle avoidance determining module 940 may include:
and a second obstacle avoidance determining unit (not shown in fig. 9) for comparing the coordinates of the target point of each sub-model with the coordinates of the three-dimensional model of the obstacle to determine whether the robot can avoid the obstacle when traveling in the current posture, wherein the target point of each sub-model is used for representing the outer contour of the sub-model.
As an implementation manner of the embodiment of the present invention, the apparatus may further include:
a pose adjustment module (not shown in fig. 9) for adjusting a pose of the robot in case it is determined that the robot cannot avoid the obstacle while traveling in the current pose;
an attitude control module (not shown in fig. 9) for controlling the robot to travel in the adjusted attitude.
As an implementation manner of the embodiment of the present invention, the posture adjustment module may include:
a pose determining unit (not shown in fig. 9) for comparing pre-stored pose models with the obstacle three-dimensional model to determine a target pose model, wherein the target pose model is a pose model whose corresponding posture enables the robot to avoid the obstacle when traveling;
and a first posture adjusting unit (not shown in fig. 9) for adjusting the posture of the robot to the posture corresponding to the target posture model.
As an implementation manner of the embodiment of the present invention, the posture adjustment module may include:
a target posture determining unit (not shown in fig. 9) for calculating a target posture according to the current posture and the obstacle three-dimensional model, wherein the target posture is a posture in which the robot can avoid the obstacle when traveling in the current direction;
a second posture adjustment unit (not shown in fig. 9) for adjusting the posture of the robot to the target posture.
The embodiment of the present invention further provides a robot, as shown in fig. 10, including a processor 1001, a communication interface 1002, a memory 1003 and a communication bus 1004, wherein the processor 1001, the communication interface 1002 and the memory 1003 complete mutual communication through the communication bus 1004,
a memory 1003 for storing a computer program;
the processor 1001 is configured to implement the following steps when executing the program stored in the memory 1003:
acquiring structural information, current pose information and obstacle information of the robot;
constructing a three-dimensional robot model according to the structural information and the current pose information, and constructing an obstacle three-dimensional model according to the obstacle information;
splitting the three-dimensional robot model into a plurality of sub-models according to a preset splitting rule;
respectively comparing the plurality of sub models with the three-dimensional model of the obstacle, and determining whether the robot can avoid the obstacle when traveling at the current posture;
and if so, controlling the robot to travel at the current posture.
It can be seen that in the scheme provided by the embodiment of the present invention, the robot first obtains the structure information, the current pose information, and the obstacle information of the robot, then constructs a three-dimensional robot model according to the structure information and the current pose information, constructs a three-dimensional obstacle model according to the obstacle information, splits the three-dimensional robot model into a plurality of submodels according to a preset splitting rule, then compares the plurality of submodels with the three-dimensional obstacle model, determines whether the robot can avoid the obstacle when traveling in the current posture, and if so, controls the robot to travel in the current posture. The method of splitting the three-dimensional model of the robot into the plurality of sub-models is adopted, the point cloud coordinates of the obstacles do not need to be calculated, only the simple three-dimensional model of the obstacles needs to be constructed, the plurality of sub-models and the three-dimensional model of the obstacles are compared, the calculated amount is small, the reaction speed of the robot is greatly improved, and the user experience is good.
The communication bus of the above robot may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the robot and other equipment.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
The splitting the three-dimensional robot model into a plurality of submodels according to the preset splitting rule may include:
and splitting the three-dimensional model of the robot according to the current pose information to obtain a plurality of sub-models.
The step of comparing the plurality of sub models with the three-dimensional model of the obstacle to determine whether the robot can avoid the obstacle when traveling in the current posture may include:
determining a plurality of obstacle avoidance planes according to the spatial position of the sub-model;
respectively determining a projection area of each sub-model on a corresponding target obstacle avoidance plane, wherein the target obstacle avoidance plane corresponding to the sub-model is as follows: in a preset direction, an obstacle avoidance plane closest to the sub-model;
determining a projection area of the obstacle three-dimensional model on each obstacle avoidance plane;
determining whether a projection area of the sub-model on each obstacle avoidance plane and a projection area of an obstacle overlap in a traveling direction of the robot;
if not, determining that the robot can avoid the obstacle when traveling at the current posture.
The step of comparing the plurality of sub models with the three-dimensional model of the obstacle to determine whether the robot can avoid the obstacle when traveling in the current posture may include:
and comparing the coordinates of the target point of each submodel with the coordinates of the three-dimensional model of the obstacle to determine whether the robot can avoid the obstacle when travelling in the current posture, wherein the target point of each submodel is used for representing the outer contour of the submodel.
In a case where it is determined that the robot cannot avoid the obstacle while traveling in the current posture, the method may further include:
adjusting the posture of the robot;
controlling the robot to travel in the adjusted posture.
The step of adjusting the posture of the robot may include:
comparing a pre-stored pose model with the three-dimensional obstacle model to determine a target pose model, where the target pose model is a pose model in which the robot, traveling in the corresponding posture, can avoid the obstacle;
and adjusting the posture of the robot to the posture corresponding to the target pose model.
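The selection of a target pose model from pre-stored pose models can be pictured as a simple search over candidate poses. In the sketch below each stored pose is represented, purely as an assumption of this illustration, by the list of sub-model boxes it would produce, and any collision-free test (such as the ones sketched earlier) can be plugged in.

from typing import Callable, Dict, List, Optional, Tuple

Point = Tuple[float, float, float]
Box = Tuple[Point, Point]
CollisionFreeTest = Callable[[List[Box], Box], bool]  # True means "can avoid the obstacle"

def pick_target_pose(stored_poses: Dict[str, List[Box]],
                     obstacle: Box,
                     can_avoid: CollisionFreeTest) -> Optional[str]:
    """Return the name of the first pre-stored pose model whose sub-models can
    avoid the obstacle, or None if no stored pose avoids it."""
    for name, sub_models in stored_poses.items():
        if can_avoid(sub_models, obstacle):
            return name
    return None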
The step of adjusting the posture of the robot may include:
calculating a target posture according to the current posture and the three-dimensional obstacle model, where the target posture is a posture in which the robot can avoid the obstacle when traveling in the current direction;
and adjusting the posture of the robot to the target posture.
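Calculating a target posture from the current posture and the obstacle model can take many forms. The sketch below shows only one deliberately simplified case, assumed for illustration: the robot lowers an adjustable height in fixed steps until it clears the underside of an overhanging obstacle without changing its travel direction; the parameter names and limits are hypothetical.

from typing import Optional

def lower_until_clear(current_height: float,
                      obstacle_bottom: float,
                      step: float = 0.05,
                      min_height: float = 0.5) -> Optional[float]:
    """Return a reduced height that passes under the obstacle, or None if the
    robot cannot be lowered enough while staying above min_height."""
    height = current_height
    while height >= obstacle_bottom and height - step >= min_height:
        height -= step
    return height if height < obstacle_bottom else None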
An embodiment of the present invention further provides a computer-readable storage medium, in which a computer program is stored, and when executed by a processor, the computer program implements the following steps:
acquiring structural information, current pose information and obstacle information of the robot;
constructing a three-dimensional robot model according to the structural information and the current pose information, and constructing an obstacle three-dimensional model according to the obstacle information;
splitting the three-dimensional robot model into a plurality of sub-models according to a preset splitting rule;
respectively comparing the plurality of sub-models with the three-dimensional obstacle model, and determining whether the robot can avoid the obstacle when traveling in the current posture;
and if so, controlling the robot to travel in the current posture.
It can be seen that, in the solution provided by the embodiment of the present invention, when the computer program is executed by the processor, the structure information, current pose information and obstacle information of the robot are first obtained; a three-dimensional robot model is then constructed from the structure information and the current pose information, and a three-dimensional obstacle model is constructed from the obstacle information; the three-dimensional robot model is split into a plurality of sub-models according to the preset splitting rule; the sub-models are then respectively compared with the three-dimensional obstacle model to determine whether the robot can avoid the obstacle when traveling in the current posture, and, if so, the robot is controlled to travel in the current posture. Because the three-dimensional robot model is split into sub-models, the point cloud coordinates of the obstacle do not need to be calculated; only a simple three-dimensional obstacle model needs to be constructed and compared with the sub-models. The amount of calculation is small, the reaction speed of the robot is greatly improved, and the user experience is good.
The step of splitting the three-dimensional robot model into a plurality of sub-models according to the preset splitting rule may include:
splitting the three-dimensional robot model according to the current pose information to obtain the plurality of sub-models.
The step of respectively comparing the plurality of sub-models with the three-dimensional obstacle model to determine whether the robot can avoid the obstacle when traveling in the current posture may include:
determining a plurality of obstacle avoidance planes according to the spatial positions of the sub-models;
respectively determining a projection area of each sub-model on its corresponding target obstacle avoidance plane, where the target obstacle avoidance plane corresponding to a sub-model is the obstacle avoidance plane closest to that sub-model in a preset direction;
determining a projection area of the three-dimensional obstacle model on each obstacle avoidance plane;
determining whether the projection area of the sub-model on each obstacle avoidance plane and the projection area of the obstacle overlap in the traveling direction of the robot;
if not, determining that the robot can avoid the obstacle when traveling in the current posture.
The step of respectively comparing the plurality of sub-models with the three-dimensional obstacle model to determine whether the robot can avoid the obstacle when traveling in the current posture may include:
comparing the coordinates of the target point of each sub-model with the coordinates of the three-dimensional obstacle model to determine whether the robot can avoid the obstacle when traveling in the current posture, where the target point of each sub-model is used to represent the outer contour of that sub-model.
In a case where it is determined that the robot cannot avoid the obstacle while traveling in the current posture, the method may further include:
adjusting the posture of the robot;
controlling the robot to travel in the adjusted posture.
The step of adjusting the posture of the robot may include:
comparing a pre-stored pose model with the three-dimensional obstacle model to determine a target pose model, where the target pose model is a pose model in which the robot, traveling in the corresponding posture, can avoid the obstacle;
and adjusting the posture of the robot to the posture corresponding to the target pose model.
The step of adjusting the posture of the robot may include:
calculating a target posture according to the current posture and the three-dimensional obstacle model, where the target posture is a posture in which the robot can avoid the obstacle when traveling in the current direction;
and adjusting the posture of the robot to the target posture.
It should be noted that, for the embodiments of the apparatus, the robot, and the computer-readable storage medium, since they are substantially similar to the embodiments of the method, the description is simple, and the relevant points can be referred to the partial description of the embodiments of the method.
It is further noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a/an ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (16)

1. An obstacle avoidance method for a robot, the method comprising:
acquiring structural information, current pose information and obstacle information of the robot;
constructing a three-dimensional robot model according to the structural information and the current pose information, and constructing an obstacle three-dimensional model according to the obstacle information;
splitting the three-dimensional robot model into a plurality of sub-models according to a preset splitting rule;
respectively comparing the plurality of sub-models with the three-dimensional model of the obstacle, and determining whether the robot can avoid the obstacle when traveling in the current posture;
and if so, controlling the robot to travel in the current posture.
2. The method of claim 1, wherein the step of splitting the three-dimensional model of the robot into a plurality of sub-models according to a preset splitting rule comprises:
and splitting the three-dimensional model of the robot according to the current pose information to obtain a plurality of sub-models.
3. The method of claim 1, wherein the step of respectively comparing the plurality of sub-models with the three-dimensional model of the obstacle to determine whether the robot is able to avoid the obstacle while traveling in the current posture comprises:
determining a plurality of obstacle avoidance planes according to the spatial positions of the sub-models;
respectively determining a projection area of each sub-model on its corresponding target obstacle avoidance plane, wherein the target obstacle avoidance plane corresponding to a sub-model is the obstacle avoidance plane closest to that sub-model in a preset direction;
determining a projection area of the three-dimensional model of the obstacle on each obstacle avoidance plane;
determining whether the projection area of the sub-model on each obstacle avoidance plane and the projection area of the obstacle overlap in the traveling direction of the robot;
if not, determining that the robot can avoid the obstacle when traveling in the current posture.
4. The method of claim 1, wherein the step of respectively comparing the plurality of sub-models with the three-dimensional model of the obstacle to determine whether the robot is able to avoid the obstacle while traveling in the current posture comprises:
comparing the coordinates of the target point of each sub-model with the coordinates of the three-dimensional model of the obstacle to determine whether the robot can avoid the obstacle when traveling in the current posture, wherein the target point of each sub-model is used to represent the outer contour of that sub-model.
5. The method of any one of claims 1-4, wherein, in the event that it is determined that the robot cannot avoid the obstacle while traveling in the current pose, the method further comprises:
adjusting the posture of the robot;
controlling the robot to travel at the adjusted pose.
6. The method of claim 5, wherein the step of adjusting the pose of the robot comprises:
comparing a pre-stored pose model with the three-dimensional model of the obstacle to determine a target pose model, wherein the target pose model is a pose model in which the robot, traveling in the corresponding posture, can avoid the obstacle;
and adjusting the posture of the robot to the posture corresponding to the target pose model.
7. The method of claim 5, wherein the step of adjusting the pose of the robot comprises:
calculating a target posture according to the current posture and the three-dimensional model of the obstacle, wherein the target posture is a posture in which the robot can avoid the obstacle when traveling in the current direction;
and adjusting the posture of the robot to the target posture.
8. An obstacle avoidance apparatus for a robot, the apparatus comprising:
the information acquisition module is used for acquiring the structural information, the current pose information and the obstacle information of the robot;
the model building module is used for building a three-dimensional robot model according to the structural information and the current pose information and building a three-dimensional obstacle model according to the obstacle information;
the model splitting module is used for splitting the three-dimensional robot model into a plurality of sub models according to a preset splitting rule;
the obstacle avoidance determining module is used for respectively comparing the plurality of sub-models with the obstacle three-dimensional model and determining whether the robot can avoid the obstacle when traveling in the current posture;
and the traveling control module is used for controlling the robot to travel in the current posture when it is determined that the robot can avoid the obstacle while traveling in the current posture.
9. The apparatus of claim 8, wherein the model splitting module comprises:
and the model splitting unit is used for splitting the three-dimensional model of the robot according to the current pose information to obtain a plurality of sub models.
10. The apparatus of claim 8, wherein the obstacle avoidance determination module comprises:
the obstacle avoidance plane determining unit is used for determining a plurality of obstacle avoidance planes according to the space position of the sub model;
a first projection area determining unit, configured to respectively determine a projection area of each sub-model on its corresponding target obstacle avoidance plane, wherein the target obstacle avoidance plane corresponding to a sub-model is the obstacle avoidance plane closest to that sub-model in a preset direction;
the second projection area determining unit is used for determining the projection area of the obstacle three-dimensional model on each obstacle avoidance plane;
an overlap area determination unit for determining whether a projection area of the sub-model on each obstacle avoidance plane overlaps with a projection area of the obstacle in the traveling direction of the robot;
and the first obstacle avoidance determining unit is used for determining that the robot can avoid the obstacle when traveling in the current posture when the projection area of the sub-model on each obstacle avoidance plane does not overlap with the projection area of the obstacle.
11. The apparatus of claim 8, wherein the obstacle avoidance determination module comprises:
and the second obstacle avoidance determining unit is used for comparing the coordinates of the target point of each sub-model with the coordinates of the three-dimensional obstacle model and determining whether the robot can avoid the obstacle when traveling in the current posture, wherein the target point of each sub-model is used to represent the outer contour of that sub-model.
12. The apparatus of any one of claims 8-11, wherein the apparatus further comprises:
the attitude adjusting module is used for adjusting the posture of the robot when it is determined that the robot cannot avoid the obstacle while traveling in the current posture;
and the attitude control module is used for controlling the robot to travel in the adjusted posture.
13. The apparatus of claim 12, wherein the pose adjustment module comprises:
a pose determining unit, configured to compare a pre-stored pose model with the three-dimensional model of the obstacle and determine a target pose model, wherein the target pose model is a pose model in which the robot, traveling in the corresponding posture, can avoid the obstacle;
and the first posture adjusting unit is used for adjusting the posture of the robot to the posture corresponding to the target posture model.
14. The apparatus of claim 12, wherein the pose adjustment module comprises:
a target posture determining unit, configured to calculate a target posture according to the current posture and the three-dimensional model of the obstacle, wherein the target posture is a posture in which the robot can avoid the obstacle when traveling in the current direction;
and the second posture adjusting unit is used for adjusting the posture of the robot to the target posture.
15. A robot, characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any of claims 1 to 7 when executing a program stored in the memory.
16. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, and the computer program, when executed by a processor, carries out the method steps of any one of claims 1 to 7.
CN201710912218.8A 2017-09-29 2017-09-29 Obstacle avoidance method and device for robot and robot Active CN108733065B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710912218.8A CN108733065B (en) 2017-09-29 2017-09-29 Obstacle avoidance method and device for robot and robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710912218.8A CN108733065B (en) 2017-09-29 2017-09-29 Obstacle avoidance method and device for robot and robot

Publications (2)

Publication Number Publication Date
CN108733065A CN108733065A (en) 2018-11-02
CN108733065B true CN108733065B (en) 2021-06-04

Family

ID=63940179

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710912218.8A Active CN108733065B (en) 2017-09-29 2017-09-29 Obstacle avoidance method and device for robot and robot

Country Status (1)

Country Link
CN (1) CN108733065B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110502014A (en) * 2019-08-22 2019-11-26 深圳乐动机器人有限公司 A kind of method and robot of robot obstacle-avoiding
JP6927597B2 (en) * 2019-08-30 2021-09-01 Necプラットフォームズ株式会社 Delivery devices, flying objects, flight systems, their methods and programs
CN112008729A (en) * 2020-09-01 2020-12-01 云南电网有限责任公司电力科学研究院 Collision detection method for overhead line maintenance mechanical arm
CN112578795A (en) * 2020-12-15 2021-03-30 深圳市优必选科技股份有限公司 Robot obstacle avoidance method and device, robot and storage medium
CN114742960A (en) * 2021-02-08 2022-07-12 追觅创新科技(苏州)有限公司 Target object avoiding method and device, storage medium and electronic device
CN113359754A (en) * 2021-06-25 2021-09-07 深圳市海柔创新科技有限公司 Obstacle avoidance method, obstacle avoidance device, electronic device, and storage medium
CN113721618A (en) * 2021-08-30 2021-11-30 中科新松有限公司 Plane determination method, device, equipment and storage medium

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1617170A (en) * 2003-09-19 2005-05-18 索尼株式会社 Environment identification device and method, route design device and method and robot
CN1883887A (en) * 2006-07-07 2006-12-27 中国科学院力学研究所 Robot obstacle-avoiding route planning method based on virtual scene
CN101804625A (en) * 2009-02-18 2010-08-18 索尼公司 Robot device and control method thereof and computer program
CN102152308A (en) * 2010-02-10 2011-08-17 库卡实验仪器有限公司 Method for a collision-free path planning of an industrial robot
CN103329182A (en) * 2010-11-08 2013-09-25 Cmte发展有限公司 A collision avoidance system and method for human commanded systems
CN104097205A (en) * 2013-04-07 2014-10-15 同济大学 Task space based self-collision avoidance control method for real-time movements of robot
CN104156520A (en) * 2014-07-31 2014-11-19 哈尔滨工程大学 Linear projection based convex-polyhedron collision detection method
CN104331081A (en) * 2014-10-10 2015-02-04 北京理工大学 Gait planning method for walking of biped robot along slope
CN104677347A (en) * 2013-11-27 2015-06-03 哈尔滨恒誉名翔科技有限公司 Indoor mobile robot capable of producing 3D navigation map based on Kinect
CN104850699A (en) * 2015-05-19 2015-08-19 天津市天锻压力机有限公司 Anti-collision control method of transfer robots of stamping production line
CN104933708A (en) * 2015-06-07 2015-09-23 浙江大学 Barrier detection method in vegetation environment based on multispectral and 3D feature fusion
CN105512377A (en) * 2015-11-30 2016-04-20 腾讯科技(深圳)有限公司 Real time virtual scene cylinder collider and convex body collision detection method and system
EP3064964A1 (en) * 2015-03-04 2016-09-07 Agco Corporation Path planning based on obstruction mapping
CN106780539A (en) * 2016-11-30 2017-05-31 航天科工智能机器人有限责任公司 Robot vision tracking
CN106845412A (en) * 2017-01-20 2017-06-13 百度在线网络技术(北京)有限公司 Obstacle recognition method and device, computer equipment and computer-readable recording medium
CN106949893A (en) * 2017-03-24 2017-07-14 华中科技大学 The Indoor Robot air navigation aid and system of a kind of three-dimensional avoidance

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006239844A (en) * 2005-03-04 2006-09-14 Sony Corp Obstacle avoiding device, obstacle avoiding method, obstacle avoiding program and mobile robot device
CN101436073A (en) * 2008-12-03 2009-05-20 江南大学 Wheeled mobile robot trace tracking method based on quantum behavior particle cluster algorithm
US9652857B2 (en) * 2011-07-01 2017-05-16 Nec Corporation Object detection apparatus detection method and program
KR101247761B1 (en) * 2011-07-15 2013-04-01 삼성중공업 주식회사 Method for finding the movement area of a mobile robot on hull surface, a mobile robot, and recording medium
KR102096398B1 (en) * 2013-07-03 2020-04-03 삼성전자주식회사 Method for recognizing position of autonomous mobile robot
CN105437232B (en) * 2016-01-11 2017-07-04 湖南拓视觉信息技术有限公司 A kind of method and device of control multi-joint Mobile Robot Obstacle Avoidance
CN106708084B (en) * 2016-11-24 2019-08-02 中国科学院自动化研究所 The automatic detection of obstacles of unmanned plane and barrier-avoiding method under complex environment
CN106643701B (en) * 2017-01-16 2019-05-14 深圳优地科技有限公司 A kind of mutual detection method and device of robot
CN106845416B (en) * 2017-01-20 2021-09-21 百度在线网络技术(北京)有限公司 Obstacle identification method and device, computer equipment and readable medium
CN106951847B (en) * 2017-03-13 2020-09-29 百度在线网络技术(北京)有限公司 Obstacle detection method, apparatus, device and storage medium

Also Published As

Publication number Publication date
CN108733065A (en) 2018-11-02

Similar Documents

Publication Publication Date Title
CN108733065B (en) Obstacle avoidance method and device for robot and robot
CN109634304B (en) Unmanned aerial vehicle flight path planning method and device and storage medium
JP6031554B2 (en) Obstacle detection method and apparatus based on monocular camera
KR102170928B1 (en) Robot obstacle avoidance control system, method, robot and storage medium
WO2020134082A1 (en) Path planning method and apparatus, and mobile device
CN108733045B (en) Robot, obstacle avoidance method thereof and computer-readable storage medium
KR20220013565A (en) Detection method, device, electronic device and storage medium
CN111006676B (en) Map construction method, device and system
WO2022078467A1 (en) Automatic robot recharging method and apparatus, and robot and storage medium
JP2016515254A5 (en)
CN107843252B (en) Navigation path optimization method and device and electronic equipment
CN111805535B (en) Positioning navigation method, device and computer storage medium
JP2022548743A (en) Obstacle information sensing method and device for mobile robot
CN113246143A (en) Mechanical arm dynamic obstacle avoidance trajectory planning method and device
CN110705385B (en) Method, device, equipment and medium for detecting angle of obstacle
CN110000793A (en) A kind of motion planning and robot control method, apparatus, storage medium and robot
Shih et al. Optimal design and placement of omni-cameras in binocular vision systems for accurate 3-D data measurement
CN113607166B (en) Indoor and outdoor positioning method and device for autonomous mobile robot based on multi-sensor fusion
CN111168685A (en) Robot control method, robot, and readable storage medium
US10248131B2 (en) Moving object controller, landmark, and moving object control method
CN112220405A (en) Self-moving tool cleaning route updating method, device, computer equipment and medium
CN111207754A (en) Particle filter-based multi-robot formation positioning method and robot equipment
CN110928296A (en) Method for avoiding charging seat by robot and robot thereof
WO2022166397A1 (en) Method and apparatus for avoiding target object, storage medium and electronic apparatus
CN117193278A (en) Method, apparatus, computer device and storage medium for dynamic edge path generation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant