CN115993830B - Path planning method and device based on obstacle avoidance and robot - Google Patents


Info

Publication number
CN115993830B
Authority
CN
China
Prior art keywords
obstacle, path, robot, point, region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310278233.7A
Other languages
Chinese (zh)
Other versions
CN115993830A (en)
Inventor
刘思文
皮康
梁瑞豪
陈伟波
Current Assignee
Foshan Longshen Robot Co Ltd
Original Assignee
Foshan Longshen Robot Co Ltd
Priority date
Filing date
Publication date
Application filed by Foshan Longshen Robot Co Ltd filed Critical Foshan Longshen Robot Co Ltd
Priority to CN202310278233.7A priority Critical patent/CN115993830B/en
Publication of CN115993830A publication Critical patent/CN115993830A/en
Application granted granted Critical
Publication of CN115993830B publication Critical patent/CN115993830B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The application belongs to the technical field of robots and provides a path planning method and device based on obstacle avoidance, and a robot. The method, applicable to a robot, comprises: acquiring obstacle information of a first obstacle detected by the robot on a first path, wherein the first path comprises at least one target path point; determining a first obstacle region according to the obstacle information, wherein the first obstacle region describes the estimated motion region of the first obstacle; and determining a second path according to the first obstacle region and the target path point, and controlling the robot to walk according to the second path, wherein the second path instructs the robot to bypass the first obstacle region. The obstacle avoidance efficiency of the robot can thereby be improved.

Description

Path planning method and device based on obstacle avoidance and robot
Technical Field
The application relates to the technical field of robots, in particular to a path planning method and device based on obstacle avoidance and a robot.
Background
Current robot navigation schemes enable a robot to avoid simple static obstacles, but when the robot encounters a dynamic obstacle, it usually either waits in place or turns around and travels some distance, so obstacle avoidance efficiency is low and needs further improvement.
Disclosure of Invention
Based on this, in order to solve the problem of low obstacle avoidance efficiency in the prior art, the embodiments of the present application provide a path planning method and device based on obstacle avoidance, and a robot. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present application provides a path planning method based on obstacle avoidance, where the method includes:
acquiring obstacle information of a first obstacle detected by the robot on a first path, wherein the first path comprises at least one target path point;
determining a first obstacle region according to the obstacle information, wherein the first obstacle region describes the estimated motion region of the first obstacle;
and determining a second path according to the first obstacle region and the target path point, and controlling the robot to walk according to the second path, wherein the second path instructs the robot to bypass the first obstacle region.
According to the path planning method provided by the embodiment of the application, the terminal device first determines the obstacle information of the first obstacle, then determines the first obstacle region of the first obstacle, then determines the second path according to the first obstacle region and the target path point, and finally controls the robot to walk according to the second path. Thus, even when it encounters a dynamic obstacle, the robot can keep walking instead of waiting in place or turning around to travel some distance, and the obstacle avoidance efficiency is greatly improved.
In a second aspect, an embodiment of the present application provides a path planning apparatus based on obstacle avoidance, where the apparatus includes:
the obstacle information acquisition module: used for acquiring obstacle information of a first obstacle detected by the robot on a first path, wherein the first path comprises at least one target path point;
the obstacle region determination module: used for determining a first obstacle region according to the obstacle information, wherein the first obstacle region describes the estimated motion region of the first obstacle;
the path determination module: used for determining a second path according to the first obstacle region and the target path point, and controlling the robot to walk according to the second path, wherein the second path instructs the robot to bypass the first obstacle region.
In a third aspect, embodiments of the present application provide a terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method according to the first aspect as described above when the computer program is executed.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method of the first aspect described above.
It will be appreciated that the advantages of the second to fourth aspects may be found in the relevant description of the first aspect and are not repeated here.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments or the prior art will be briefly described below.
Fig. 1 is a flow chart of a path planning method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a target path point in a path planning method according to an embodiment of the present disclosure;
fig. 3 is a flowchart of step S300 in a path planning method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a first obstacle region in a path planning method according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a target boundary point in a path planning method according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a first obstacle avoidance path point in a path planning method according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a second path in a path planning method according to an embodiment of the present disclosure;
fig. 8 is a flowchart illustrating step S341 in the path planning method according to an embodiment of the present application;
fig. 9 is a flowchart illustrating a step S343 in the path planning method according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a third obstacle avoidance path point in the path planning method according to an embodiment of the present disclosure;
FIG. 11 is a schematic diagram of a second obstacle avoidance path point in a path planning method according to an embodiment of the present disclosure;
FIG. 12 is a block diagram of a path planning apparatus according to an embodiment of the present application;
fig. 13 is a schematic view of a robot according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In the description of this application and the claims that follow, the terms "first," "second," "third," etc. are used merely to distinguish between descriptions and should not be construed to indicate or imply relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
In order to illustrate the technical solutions described in the present application, the following description is made by specific examples.
Referring to fig. 1, fig. 1 is a flow chart of a path planning method based on obstacle avoidance according to an embodiment of the present application. In this embodiment, the execution body of the path planning method is a terminal device. It will be appreciated that the terminal device may be the robot itself, or a mobile phone, tablet computer, notebook computer, ultra-mobile personal computer (UMPC), netbook, personal digital assistant (PDA), etc. mounted on the robot; the specific type of the terminal device is not limited in the embodiments of the present application.
Referring to fig. 1, the path planning method provided in the embodiment of the present application is applicable to a robot, and includes, but is not limited to, the following steps:
in S100, obstacle information of a first obstacle detected by the robot on a first path is acquired.
For example, before executing the path planning method, the robot may first walk according to the first path. The first path includes a start point, at least one target path point, and an end point; it may be a default path pre-stored in the terminal device, or an initial path established by field personnel for the current map. For ease of understanding, please refer to fig. 2: the circle with "start" in fig. 2 indicates the start point; the circle with "now" in fig. 2 represents the robot's current path point, i.e. the path point at the robot's current position; the circle with "no" in fig. 2 represents the target path point, i.e. the robot's next path point; the blank circles in fig. 2 represent the to-be-executed path points, i.e. the path points to be executed after the robot passes the target path point; once the robot reaches the current target path point, the next to-be-executed path point becomes the target path point; the circle with "end" in fig. 2 represents the end point; and the dashed line in fig. 2 indicates the execution order of the first path, from the start point to the end point. It should be noted that the two to-be-executed path points and the straight-line shape of the first path in fig. 2 are merely examples; the specific number of to-be-executed path points and the shape of the first path are not limited, and in other possible implementations the number of target path points may be any number and the first path may take any shape.
Specifically, when the robot walks according to the first path, the terminal device can detect, through a pre-installed ultrasonic sensor, whether a first obstacle exists within the surrounding range of the robot; the first obstacle may be, for example, a moving dog or a continuously spraying nozzle. If the terminal device detects a first obstacle, it acquires the obstacle information of the first obstacle detected by the robot. In some possible implementations, the obstacle information may include size information, speed information, and third position information of the first obstacle, where the size information represents the size parameters of the first obstacle, the speed information represents the speed parameters of the first obstacle, and the third position information represents the position of the first obstacle in the current map.
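For illustration, the three obstacle attributes named above can be grouped into one record. The following is a minimal sketch; the `ObstacleInfo` name and field layout are assumptions, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class ObstacleInfo:
    # Hypothetical container for the three attributes the text names.
    size: tuple      # size parameters, e.g. (length, width, height)
    speed: tuple     # speed parameters, e.g. (vx, vy)
    position: tuple  # third position information: (x, y) in the current map

# Example: a slowly moving obstacle detected at (3.0, 1.5)
info = ObstacleInfo(size=(0.5, 0.3, 0.4), speed=(0.2, 0.0), position=(3.0, 1.5))
```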
In S200, a first obstacle region is determined from the obstacle information.
Specifically, the first obstacle region represents the predicted movement region of the first obstacle, such as the predicted movement range of a moving dog or the spraying range of a continuously spraying nozzle. Since the first obstacle may not have appeared or moved when the first path was established, if the robot executed the first path by crossing the first obstacle region, the first obstacle could endanger the robot. The terminal device may therefore determine the first obstacle region of the first obstacle according to the acquired obstacle information, so that the determined region closely corresponds to the first obstacle.
In S300, a second path is determined according to the first obstacle region and the target path point, and the robot is controlled to walk according to the second path.
Specifically, the second path is a walking path that instructs the robot to bypass the first obstacle region. The terminal device can determine the second path according to the first obstacle region and the target path point, and then control the robot to walk according to the second path, so that the robot efficiently bypasses the first obstacle region instead of waiting in place or turning around to travel some distance, which improves the obstacle avoidance efficiency of the robot.
In some possible implementations, to provide a path for the robot to avoid the obstacle with high efficiency, referring to fig. 3, step S300 includes, but is not limited to, the following steps:
in S310, it is searched whether at least one target path point exists in the first obstacle region.
Specifically, after determining the first obstacle region, the terminal device may compare the coordinates corresponding to each target path point with the coordinate set corresponding to the first obstacle region. For any target path point: if its coordinates are in the coordinate set corresponding to the first obstacle region, the target path point is determined to be located in the first obstacle region. In another possible implementation, referring to fig. 4, the trapezoid box with "first obstacle" in fig. 4 represents the first obstacle, the white arrow in fig. 4 represents the movement direction of the first obstacle, and the region enclosed by the dash-dot line in fig. 4 represents the first obstacle region. The terminal device may also compare both the coordinates of the target path points and the coordinate set of the to-be-executed path points against the coordinate set corresponding to the first obstacle region. For any target path point or to-be-executed path point: if its coordinates are in the coordinate set corresponding to the first obstacle region, it is determined to be located in the first obstacle region. This thoroughly eliminates the safety hazard of path points lying inside the first obstacle region and improves the safety of the robot while walking.
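The coordinate-set comparison of S310 reduces to a membership test. A minimal sketch follows, assuming the obstacle region has been discretised into a set of occupied grid cells; the function name and toy data are illustrative only:

```python
def waypoints_in_region(waypoints, region_cells):
    # Keep only the path points whose grid coordinates lie inside the
    # first obstacle region, preserving path order.
    return [p for p in waypoints if p in region_cells]

region = {(2, 2), (2, 3), (3, 2), (3, 3)}        # toy first obstacle region
path = [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]  # target + to-be-executed points
blocked = waypoints_in_region(path, region)      # the points that must be bypassed
```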
In S320, if there is at least one target waypoint in the first obstacle region, then for each target waypoint in the first obstacle region: first position information of a target path point is acquired.
Specifically, after finding that target path points exist in the first obstacle region, the terminal device may acquire the first position information of each target path point in the first obstacle region, where the first position information indicates the coordinates of the target path point, laying the groundwork for the subsequent construction of the second path.
In S330, a boundary point having the shortest distance from the target route point is selected as a target boundary point from the boundary point set corresponding to the first obstacle region based on the first position information.
For example, referring to fig. 5, the circles filled with black in fig. 5 each represent a boundary point in the boundary point set corresponding to the first obstacle region, and the circle with "side" in fig. 5 represents the target boundary point selected by the terminal device. The terminal device may first obtain the coordinate set of the boundary point set corresponding to the first obstacle region, take the coordinates of any one target path point as the reference coordinates, calculate the distance between each boundary point in the boundary point set and the target path point, and select the boundary point with the minimum distance as the target boundary point, so that the selected boundary point is the most useful reference for obstacle avoidance.
In some possible implementations, because the first obstacle region may have many boundary points, the terminal device may instead search the surrounding area of each target path point located in the first obstacle region, using its coordinates as the search center and a preset step length as the search radius. If no boundary point is found at the preset step length, the step length is increased until a boundary point is found, and that boundary point is taken as the target boundary point, improving the efficiency of selecting the target boundary point.
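Both selection strategies described for S330 (the exhaustive minimum-distance scan and the expanding-step search) can be sketched as follows; the function names and the use of the Euclidean metric are assumptions for illustration:

```python
import math

def nearest_boundary_point(target, boundary_points):
    # Exhaustive scan: the boundary point with the shortest distance
    # to the target path point becomes the target boundary point.
    return min(boundary_points, key=lambda b: math.dist(target, b))

def nearest_by_expanding_search(target, boundary_points, step=1.0):
    # Expanding-step variant: grow the search radius by `step` until at
    # least one boundary point falls inside, then take the closest one.
    radius = step
    while True:
        hits = [b for b in boundary_points if math.dist(target, b) <= radius]
        if hits:
            return min(hits, key=lambda b: math.dist(target, b))
        radius += step
```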
In S340, a first obstacle avoidance path point is determined according to a preset first distance value and a target boundary point.
For example, referring to fig. 6, the dashed line in fig. 6 represents a schematic circle with the target boundary point as the center and the first distance value as the radius, and the circle with "first obstacle avoidance" in fig. 6 represents the first obstacle avoidance path point. Under the limit of the first distance value, the terminal device can determine the first obstacle avoidance path point with the target boundary point as the reference point, keeping only the candidate point that lies outside the first obstacle region and closest to it. When the robot is located at the first obstacle avoidance path point, the first obstacle can hardly affect the robot's walking, so the robot can avoid the obstacle efficiently. In practical applications, the specific first distance value may be determined in combination with the three-dimensional size parameters and/or the maximum motion range of the robot.
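One way to realise S340 under the stated constraint (a point outside the region, at the first distance value from the target boundary point, and as close to the region as that limit allows) is to project outward along the line from the in-region path point through the boundary point. The patent does not fix this construction, so the sketch below is an assumption:

```python
import math

def first_avoidance_point(inner_waypoint, boundary_point, first_distance):
    # Push `first_distance` beyond the target boundary point, along the
    # outward direction from the waypoint inside the region, so the
    # result lands just outside the first obstacle region.
    dx = boundary_point[0] - inner_waypoint[0]
    dy = boundary_point[1] - inner_waypoint[1]
    norm = math.hypot(dx, dy) or 1.0  # guard against coincident points
    return (boundary_point[0] + first_distance * dx / norm,
            boundary_point[1] + first_distance * dy / norm)
```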
In S350, a second path is determined in combination with the first path and the first obstacle avoidance path point.
For example, referring to fig. 7, the circle with "a" in fig. 7 represents a first obstacle avoidance path point, and the solid line with an arrow in fig. 7 represents the second path. The terminal device can construct the second path by combining the current path point of the first path, the first obstacle avoidance path points, and the to-be-executed path points of the first path that lie outside the first obstacle region. When the robot executes the second path, its obstacle avoidance efficiency is improved.
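The splicing of S350 can be sketched as follows, assuming path points and region cells share one discretisation; the single-splice strategy (replacing the whole blocked stretch with the avoidance points in one place) is an illustrative assumption:

```python
def build_second_path(first_path, region_cells, avoidance_points):
    # Keep the waypoints of the first path that lie outside the obstacle
    # region; splice the avoidance points in at the first blocked waypoint.
    out, spliced = [], False
    for p in first_path:
        if p in region_cells:
            if not spliced:
                out.extend(avoidance_points)
                spliced = True
        else:
            out.append(p)
    return out

second = build_second_path(
    [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)],  # first path
    {(2, 2), (3, 3)},                          # cells of the first obstacle region
    [(1, 3), (2, 4)],                          # first obstacle avoidance path points
)
```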
In some possible implementations, to further improve the obstacle avoidance effect of the robot, the robot may be pre-installed with a camera, which may be a high-precision camera; referring to fig. 8, after step S340, the method further includes, but is not limited to, the following steps:
in S341, a first image taken by the camera for a first obstacle is acquired.
Specifically, the terminal device may control the robot to orient a lens of a preset high-precision camera toward the first obstacle, and then take a high-precision image of the first obstacle, i.e., a first image, by the high-precision camera.
In S342, the first pixel point corresponding to the first image is classified by the classifier to determine whether the first obstacle belongs to a dangerous class.
Specifically, because first obstacles are highly random and differ greatly from one another, the terminal device can train the classifier in advance on a training set containing multiple categories, for example with dogs and nozzles assigned to the dangerous category. Once the classifier is trained, the terminal device can classify each pixel point of the high-precision image through the classifier to determine whether the first image shows a dangerous category, thereby judging whether the first obstacle is dangerous and enabling different countermeasures to be taken subsequently. In another possible implementation, the terminal device may first accurately detect the first obstacle in the high-precision image through object detection, and then classify only the pixel points corresponding to the first obstacle, reducing the data volume and improving processing performance.
In S343, if the first obstacle belongs to the dangerous category, the second obstacle avoidance path point is determined according to the first obstacle avoidance path point and the preset second distance value.
Specifically, the second distance value represents a custom safe distance value. If the first obstacle belongs to the dangerous category, the terminal device can determine the second obstacle avoidance path point based on the first obstacle avoidance path point and the second distance value. This increases the distance between the robot's planned path and the first obstacle region while still sparing the robot from waiting in place or turning around to travel some distance when it encounters the dynamic first obstacle, improving the obstacle avoidance effect of the robot.
Accordingly, step S350 includes, but is not limited to, the following steps:
s351, if the first obstacle belongs to the dangerous category, determining a third path by combining the first path and the second obstacle avoidance path point.
Specifically, if the first obstacle belongs to the dangerous category, the terminal device may construct a third path by combining the current path point of the first path, the second obstacle avoidance path points, and the to-be-executed path points of the first path that lie outside the first obstacle region. When the robot walks according to the third path, the probability that it is affected by the first obstacle can be greatly reduced.
In some possible implementations, to improve the security of the third path, referring to fig. 9, step S343 includes, but is not limited to, the following steps:
in S3431, if the first obstacle belongs to the dangerous category, the first obstacle avoidance path point is taken as a center, and the second distance value is taken as a step length to generate a third obstacle avoidance path point group.
For example, referring to fig. 10, the dashed line in fig. 10 represents a schematic circle with the first obstacle avoidance path point as the center and the second distance value as the radius, and the circles with "third obstacle avoidance" in fig. 10 represent the third obstacle avoidance path point group, which here contains three third obstacle avoidance path points. It should be noted that all three third obstacle avoidance path points are located outside the first obstacle region. In other possible implementations, the third obstacle avoidance path points in the group may be arranged at equal intervals, providing more options for the selection of the second obstacle avoidance path point.
In S3432, for each third obstacle avoidance path point: and obtaining a third distance value of a third obstacle avoidance path point.
Specifically, the third distance value represents the distance from a third obstacle avoidance path point to the boundary of the first obstacle region. The terminal device can obtain the third distance value of each of the three third obstacle avoidance path points from their coordinates and the coordinate set of the boundary points of the first obstacle region.
In S3433, the third obstacle avoidance path point with the largest third distance value in the third obstacle avoidance path point group is selected as the second obstacle avoidance path point.
Specifically, referring to fig. 11, after comparing the third distance values corresponding to the third obstacle avoidance path points, the terminal device may select the third obstacle avoidance path point with the largest third distance value as the second obstacle avoidance path point.
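Steps S3431 to S3433 together amount to candidate generation on a circle followed by a farthest-from-boundary selection. A minimal sketch follows; the equal-interval angles, the `is_outside` predicate, and the sampled boundary representation are assumptions for illustration:

```python
import math

def second_avoidance_point(first_point, second_distance, region_boundary,
                           is_outside, n_candidates=8):
    # S3431: candidates at equal angular intervals on a circle of radius
    # `second_distance` (the safe-distance step) around the first
    # obstacle avoidance path point.
    candidates = [
        (first_point[0] + second_distance * math.cos(2 * math.pi * i / n_candidates),
         first_point[1] + second_distance * math.sin(2 * math.pi * i / n_candidates))
        for i in range(n_candidates)
    ]
    # Discard candidates that fall inside the first obstacle region.
    candidates = [c for c in candidates if is_outside(c)]

    # S3432/S3433: the third distance value is the distance to the region
    # boundary; pick the candidate with the largest such value.
    def boundary_dist(p):
        return min(math.dist(p, b) for b in region_boundary)
    return max(candidates, key=boundary_dist)
```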
In some possible implementations, to improve reproducibility of the second path, after step S300, the method further includes, but is not limited to, the steps of:
in S400, the obstacle information of the first obstacle and the second path are uploaded to the cloud server.
Specifically, the terminal device can upload the obstacle information of the first obstacle and the second path to the cloud server, so that users can view the obstacle information of the first obstacle in the background. This helps investigate possible abnormal conditions in the current map, and also improves the obstacle avoidance efficiency of the terminal device when the same path is executed on the same map.
In some possible implementations, to facilitate the terminal device's determination of the first obstacle region, step S200 includes, but is not limited to, the following step:
In S2001, the size information, the speed information, and the third position information are imported into a preset obstacle region calculation function to determine the first obstacle region.
Specifically, the terminal device may import all of the size information, the speed information, and the third position information into a preset obstacle region calculation function, thereby determining the first obstacle region. In some possible implementations, the obstacle region calculation function, together with its auxiliary speed function and transfer function, is given as formulas (rendered only as images in this text and not reproducible here); their terms are as follows:
- the estimated obstacle region of the first obstacle, i.e. the output of the function;
- the speed information of the first obstacle;
- the current speed information of the robot;
- a speed correlation coefficient of the first obstacle and the robot; when the speed of the first obstacle is greater than twice the current speed of the robot, this coefficient may take a positive integer greater than 5;
- a speed function for the first obstacle reaching the boundary of the obstacle region;
- a reference speed threshold, which may be a preset fixed value;
- the size information of the first obstacle;
- the amount of size change when the first obstacle reaches the boundary corresponding to the obstacle region, computed from the amounts of change in length, width, and height over a time variation; this reference amount of size change is particularly important when the first obstacle scatters, as spray does;
- the third position information of the first obstacle;
- the predicted position of the first obstacle, determined based on the speed information;
- a transfer function that determines the predicted position from the third position, mapping the spatial coordinates corresponding to the third position information of the first obstacle to the spatial coordinates corresponding to the predicted position of the first obstacle.
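The obstacle region calculation function appears only as embedded images in this text and cannot be reproduced here. Purely as a greatly simplified stand-in, not the patented formula, a region can be approximated from the same three inputs (size, speed, third position) as an axis-aligned box inflated by the obstacle's predicted displacement; every name and constant below is an assumption:

```python
def simple_obstacle_region(position, size, speed, horizon=2.0, margin=0.2):
    # Illustrative stand-in (NOT the patented function): an axis-aligned
    # bounding box covering the obstacle now and at its predicted position
    # `horizon` seconds ahead, plus a safety margin on every side.
    x, y = position
    length, width = size[0], size[1]
    dx, dy = speed[0] * horizon, speed[1] * horizon
    xmin = min(x, x + dx) - length / 2 - margin
    xmax = max(x, x + dx) + length / 2 + margin
    ymin = min(y, y + dy) - width / 2 - margin
    ymax = max(y, y + dy) + width / 2 + margin
    return (xmin, ymin, xmax, ymax)
```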
The implementation principle of the path planning method based on obstacle avoidance in the embodiments of the application is as follows: the terminal device first acquires obstacle information strongly related to the first obstacle, then determines the obstacle region of the first obstacle according to the obstacle information, generates obstacle avoidance path points based on the path points inside the obstacle region, generates a second path instructing the robot to bypass the obstacle region according to those obstacle avoidance path points, and controls the robot to walk according to the second path so as to avoid the first obstacle. This reduces the cases where the robot, on encountering a dynamic obstacle, waits in place or turns around to travel some distance, and improves the obstacle avoidance efficiency.
It should be noted that, the sequence number of each step in the above embodiment does not mean the sequence of execution sequence, and the execution sequence of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiment of the present application.
The embodiment of the present application further provides a path planning device based on obstacle avoidance, which is suitable for a robot, and for convenience of description, only a portion relevant to the present application is shown, as shown in fig. 12, the device 12 includes:
the obstacle information acquisition module 120: used for acquiring obstacle information of a first obstacle detected by the robot on a first path, wherein the first path comprises at least one target path point;
the obstacle region determination module 121: used for determining a first obstacle region according to the obstacle information, wherein the first obstacle region describes the estimated motion region of the first obstacle;
the path determination module 122: used for determining a second path according to the first obstacle region and the target path point, and controlling the robot to walk according to the second path, wherein the second path instructs the robot to bypass the first obstacle region.
Optionally, the path determining module 122 includes:
a searching submodule, configured to search whether at least one target path point exists within the first obstacle region;
a first position information determination submodule, configured to, if at least one target path point exists within the first obstacle region, acquire first position information for each target path point within the first obstacle region;
a target boundary point determination submodule, configured to select, according to the first position information, the boundary point with the shortest distance to the target path point from the boundary point set corresponding to the first obstacle region as the target boundary point;
a first obstacle avoidance path point determination submodule, configured to determine a first obstacle avoidance path point according to a preset first distance value and the target boundary point;
a path determination submodule, configured to determine the second path by combining the first path and the first obstacle avoidance path point.
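The target boundary point and first obstacle avoidance path point steps above can be sketched as follows. The patent states only that the avoidance point is determined "according to a preset first distance value and the target boundary point"; the offset direction used here (pushing the boundary point away from the region center) is an illustrative assumption, as are the function and parameter names:

```python
import math

def first_avoidance_point(waypoint, boundary_points, region_center, first_distance):
    """Pick the boundary point of the obstacle region nearest to the blocked
    target waypoint (the 'target boundary point'), then push it outward,
    away from the region center, by the preset first distance value to
    obtain the first obstacle avoidance path point.
    The outward direction is an assumption for illustration."""
    target = min(boundary_points, key=lambda b: math.dist(b, waypoint))
    dx, dy = target[0] - region_center[0], target[1] - region_center[1]
    norm = math.hypot(dx, dy) or 1.0  # guard against a degenerate boundary point
    return (target[0] + first_distance * dx / norm,
            target[1] + first_distance * dy / norm)

# Unit-diamond boundary around the origin; a waypoint near (1, 0) yields an
# avoidance point pushed 0.5 further out along the +x axis:
boundary = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
first_avoidance_point((0.9, 0.1), boundary, (0.0, 0.0), 0.5)  # → (1.5, 0.0)
```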
Optionally, a camera is mounted on the robot, and the apparatus 12 further includes:
an image acquisition module, configured to acquire a first image of the first obstacle captured by the camera;
a first obstacle classification module, configured to classify, through a classifier, the first pixel points corresponding to the first image, so as to determine whether the first obstacle belongs to a dangerous class;
a second obstacle avoidance path point determination module, configured to determine, if the first obstacle belongs to the dangerous class, a second obstacle avoidance path point according to the first obstacle avoidance path point and a preset second distance value;
a third path determination module, configured to determine, if the first obstacle belongs to the dangerous class, a third path by combining the first path and the second obstacle avoidance path point.
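The pixel-classification step can be illustrated as follows. The patent does not disclose the classifier, the label set, or the decision rule, so everything below (the label names, the fraction threshold, the majority-style rule) is a hedged assumption meant only to show how per-pixel labels might be aggregated into a per-obstacle danger decision:

```python
def obstacle_is_dangerous(pixel_labels,
                          danger_labels=frozenset({"fire", "person"}),
                          threshold=0.3):
    """Hedged sketch: given per-pixel class labels produced by some
    classifier for the first image, treat the obstacle as belonging to
    the dangerous class when the fraction of pixels carrying a danger
    label exceeds a threshold. Labels and threshold are illustrative."""
    if not pixel_labels:
        return False
    hits = sum(1 for lab in pixel_labels if lab in danger_labels)
    return hits / len(pixel_labels) > threshold

# 40% of pixels labelled "fire" → dangerous; all "floor" → not dangerous
obstacle_is_dangerous(["fire"] * 4 + ["floor"] * 6)   # → True
obstacle_is_dangerous(["floor"] * 10)                 # → False
```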
Optionally, the second obstacle avoidance path point determining module includes:
an obstacle avoidance path point group generation submodule, configured to generate, if the first obstacle belongs to the dangerous class, a third obstacle avoidance path point group centered on the first obstacle avoidance path point with the second distance value as the step length, wherein the third obstacle avoidance path point group comprises a plurality of third obstacle avoidance path points, each located outside the first obstacle region;
a third distance value determination submodule, configured to acquire, for each third obstacle avoidance path point, a third distance value describing the distance from that third obstacle avoidance path point to the boundary of the first obstacle region;
a second obstacle avoidance path point determination submodule, configured to select the third obstacle avoidance path point with the longest third distance value in the third obstacle avoidance path point group as the second obstacle avoidance path point.
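The three submodules above can be sketched together: generate candidates around the first avoidance point at the second-distance step, discard those inside the obstacle region, and keep the candidate farthest from the region boundary. Sampling the candidates on a ring of `n` equally spaced angles is my assumption; the patent only specifies the center and the step length:

```python
import math

def second_avoidance_point(first_point, step, region_contains, boundary_distance, n=16):
    """Generate the third obstacle avoidance path point group on a ring
    centered at the first avoidance point with radius equal to the second
    distance value (step), keep only candidates outside the obstacle
    region, and return the one with the largest distance to the region
    boundary (the second obstacle avoidance path point)."""
    candidates = []
    for k in range(n):
        ang = 2 * math.pi * k / n
        p = (first_point[0] + step * math.cos(ang),
             first_point[1] + step * math.sin(ang))
        if not region_contains(p):  # must lie outside the first obstacle region
            candidates.append(p)
    return max(candidates, key=boundary_distance, default=None)

# Obstacle region: unit disk at the origin. From a first avoidance point at
# (1.5, 0) with step 1.0, the farthest surviving candidate is (2.5, 0):
inside = lambda p: math.hypot(p[0], p[1]) < 1.0
to_boundary = lambda p: math.hypot(p[0], p[1]) - 1.0
second_avoidance_point((1.5, 0.0), 1.0, inside, to_boundary)  # → (2.5, 0.0)
```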
Optionally, the device 12 further comprises:
an uploading module, configured to upload the obstacle information of the first obstacle and the second path to the cloud server.
Optionally, when the obstacle information includes size information, speed information, and third position information of the first obstacle, the obstacle region determination module 121 includes:
a first obstacle region determination submodule, configured to import the size information, the speed information, and the third position information into a preset obstacle region calculation function to determine the first obstacle region, wherein the obstacle region calculation function is specifically:
Figure SMS_23
in the formula:
Figure SMS_24
is the obstacle region estimated for the first obstacle;
Figure SMS_28
is the speed correlation coefficient between the first obstacle and the robot;
Figure SMS_31
is the speed information of the first obstacle;
Figure SMS_26
is the current speed information of the robot;
Figure SMS_27
is the speed function for calculating the first obstacle reaching the boundary corresponding to the obstacle region;
Figure SMS_30
is the reference speed threshold;
Figure SMS_32
is the size information of the first obstacle;
Figure SMS_25
is the size change amount when the first obstacle reaches the boundary corresponding to the obstacle region;
Figure SMS_29
is the third position information of the first obstacle;
Figure SMS_33
is the predicted position of the first obstacle determined based on the speed information;
Figure SMS_34
is the transfer function that determines the predicted position based on the third position information.
It should be noted that, because the information interaction and execution processes between the above modules are based on the same concept as the method embodiments of the present application, their specific functions and technical effects can be found in the method embodiment section and are not repeated here.
The embodiment of the present application further provides a robot. As shown in fig. 13, the robot 13 of this embodiment includes: a processor 131, a memory 132, and a computer program 133 stored in the memory 132 and executable on the processor 131. When the processor 131 executes the computer program 133, the steps in the above path planning method embodiment are implemented, such as steps S100 to S300 shown in fig. 1; alternatively, when the processor 131 executes the computer program 133, the functions of the modules in the above device are implemented, such as the functions of modules 120 to 122 shown in fig. 12.
The robot 13 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server. The robot 13 includes, but is not limited to, the processor 131 and the memory 132. Those skilled in the art will appreciate that fig. 13 is merely an example of the robot 13 and does not limit it: the robot 13 may include more or fewer components than shown, combine certain components, or use different components; for example, the robot 13 may further include input and output devices, network access devices, buses, and the like.
The processor 131 may be a central processing unit (Central Processing Unit, CPU), other general purpose processor, digital signal processor (Digital Signal Processor, DSP), application specific integrated circuit (Application Specific Integrated Circuit, ASIC), field programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, etc.; a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 132 may be an internal storage unit of the robot 13, such as a hard disk or memory of the robot 13; it may also be an external storage device of the robot 13, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the robot 13. Further, the memory 132 may include both an internal storage unit and an external storage device of the robot 13. The memory 132 may store the computer program 133 and other programs and data required by the robot 13, and may also temporarily store data that has been output or is to be output.
An embodiment of the present application also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the various method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like.
The foregoing are preferred embodiments of the present application and are not intended to limit its scope in any way; therefore, all equivalent changes made according to the method, principles, and structure of the present application shall fall within the protection scope of the present application.

Claims (8)

1. A path planning method based on obstacle avoidance, which is applicable to a robot, and is characterized in that the method comprises the following steps:
acquiring obstacle information of a first obstacle detected by the robot on a first path, wherein the first path comprises at least one target path point;
determining a first obstacle region according to the obstacle information, wherein the first obstacle region is used for describing a motion region estimated by the first obstacle;
determining a second path according to the first obstacle region and the target path point, and controlling the robot to walk according to the second path, wherein the second path is used for indicating the robot to bypass the first obstacle region;
wherein the obstacle information includes size information, speed information, and third position information of the first obstacle, and the determining the first obstacle region according to the obstacle information includes:
importing the size information, the speed information, and the third position information into a preset obstacle region calculation function to determine the first obstacle region, wherein the obstacle region calculation function is specifically:
Figure QLYQS_1
in the formula:
Figure QLYQS_2
is the obstacle region estimated for the first obstacle;
Figure QLYQS_5
is the speed correlation coefficient between the first obstacle and the robot;
Figure QLYQS_7
is the speed information of the first obstacle;
Figure QLYQS_4
is the current speed information of the robot;
Figure QLYQS_6
is the speed function for calculating the first obstacle reaching the boundary corresponding to the obstacle region;
Figure QLYQS_9
is the reference speed threshold;
Figure QLYQS_11
is the size information of the first obstacle;
Figure QLYQS_3
is the size change amount when the first obstacle reaches the boundary corresponding to the obstacle region;
Figure QLYQS_8
is the third position information of the first obstacle;
Figure QLYQS_10
is the predicted position of the first obstacle determined based on the speed information;
Figure QLYQS_12
is the transfer function that determines the predicted position based on the third position information;
the determining a second path according to the first obstacle region and the target path point comprises:
searching whether at least one target path point exists in the first obstacle region;
if at least one target waypoint exists in the first obstacle region, for each of the target waypoints in the first obstacle region: acquiring first position information of the target path point;
selecting, according to the first position information, the boundary point with the shortest distance to the target path point from the boundary point set corresponding to the first obstacle region as a target boundary point;
determining a first obstacle avoidance path point according to a preset first distance value and the target boundary point;
and determining a second path by combining the first path and the first obstacle avoidance path point.
2. The method of claim 1, wherein the robot is equipped with a camera, and wherein after the determining a first obstacle avoidance path point based on the preset first distance value and the target boundary point, the method further comprises:
acquiring a first image of the first obstacle captured by the camera;
classifying a first pixel point corresponding to the first image through a classifier to determine whether the first obstacle belongs to a dangerous class;
if the first obstacle belongs to the dangerous category, determining a second obstacle avoidance path point according to the first obstacle avoidance path point and a preset second distance value;
accordingly, the determining a second path in combination with the first path and the first obstacle avoidance path point includes:
and if the first obstacle belongs to the dangerous category, determining a third path by combining the first path and the second obstacle avoidance path point.
3. The method of claim 2, wherein if the first obstacle belongs to a hazard class, determining the second obstacle avoidance waypoint according to the first obstacle avoidance waypoint and a preset second distance value comprises:
if the first obstacle belongs to the dangerous class, generating a third obstacle avoidance path point group centered on the first obstacle avoidance path point with the second distance value as the step length, wherein the third obstacle avoidance path point group comprises a plurality of third obstacle avoidance path points, and the positions of the third obstacle avoidance path points are located outside the first obstacle region;
for each of the third obstacle avoidance waypoints:
acquiring a third distance value of the third obstacle avoidance path point, wherein the third distance value is used for describing a distance value of a boundary of the third obstacle avoidance path point corresponding to the first obstacle region;
and selecting a third obstacle avoidance path point with the longest third distance value in the third obstacle avoidance path point group as a second obstacle avoidance path point.
4. The method of claim 1, wherein after the determining a second path from the first obstacle region and the target path point and controlling the robot to walk along the second path, the method further comprises:
uploading the obstacle information of the first obstacle and the second path to a cloud server.
5. A path planning device based on obstacle avoidance, suitable for a robot, the device comprising:
an obstacle information acquisition module, configured to acquire obstacle information of a first obstacle detected by the robot on a first path, wherein the first path comprises at least one target path point;
an obstacle region determination module, configured to determine a first obstacle region according to the obstacle information, wherein the first obstacle region is used for describing a motion region estimated for the first obstacle;
a path determination module, configured to determine a second path according to the first obstacle region and the target path point, and to control the robot to walk along the second path, wherein the second path is used for instructing the robot to bypass the first obstacle region;
wherein the obstacle information includes size information, speed information, and third position information of the first obstacle, and the obstacle region determining module includes:
a first obstacle region determination submodule, configured to import the size information, the speed information, and the third position information into a preset obstacle region calculation function to determine the first obstacle region, wherein the obstacle region calculation function is specifically:
Figure QLYQS_14
in the formula:
Figure QLYQS_16
is the obstacle region estimated for the first obstacle;
Figure QLYQS_19
is the speed correlation coefficient between the first obstacle and the robot;
Figure QLYQS_20
is the speed information of the first obstacle;
Figure QLYQS_15
is the current speed information of the robot;
Figure QLYQS_18
is the speed function for calculating the first obstacle reaching the boundary corresponding to the obstacle region;
Figure QLYQS_21
is the reference speed threshold;
Figure QLYQS_23
is the size information of the first obstacle;
Figure QLYQS_17
is the size change amount when the first obstacle reaches the boundary corresponding to the obstacle region;
Figure QLYQS_22
is the third position information of the first obstacle;
Figure QLYQS_24
is the predicted position of the first obstacle determined based on the speed information;
Figure QLYQS_25
is the transfer function that determines the predicted position based on the third position information;
The path determination module includes:
a searching submodule, configured to search whether at least one of the target path points exists within the first obstacle region;
a first position information determination submodule, configured to, if at least one of the target path points exists within the first obstacle region, acquire first position information for each of the target path points within the first obstacle region;
a target boundary point determination submodule, configured to select, according to the first position information, the boundary point with the shortest distance to the target path point from the boundary point set corresponding to the first obstacle region as a target boundary point;
a first obstacle avoidance path point determination submodule, configured to determine a first obstacle avoidance path point according to a preset first distance value and the target boundary point;
a path determination submodule, configured to determine the second path by combining the first path and the first obstacle avoidance path point.
6. The apparatus of claim 5, wherein the apparatus further comprises:
an uploading module, configured to upload the obstacle information of the first obstacle and the second path to a cloud server.
7. A robot comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 4 when the computer program is executed.
8. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method according to any one of claims 1 to 4.
CN202310278233.7A 2023-03-21 2023-03-21 Path planning method and device based on obstacle avoidance and robot Active CN115993830B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310278233.7A CN115993830B (en) 2023-03-21 2023-03-21 Path planning method and device based on obstacle avoidance and robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310278233.7A CN115993830B (en) 2023-03-21 2023-03-21 Path planning method and device based on obstacle avoidance and robot

Publications (2)

Publication Number Publication Date
CN115993830A CN115993830A (en) 2023-04-21
CN115993830B true CN115993830B (en) 2023-06-06

Family

ID=85995256

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310278233.7A Active CN115993830B (en) 2023-03-21 2023-03-21 Path planning method and device based on obstacle avoidance and robot

Country Status (1)

Country Link
CN (1) CN115993830B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117130368A (en) * 2023-09-20 2023-11-28 未岚大陆(北京)科技有限公司 Control method and mowing robot

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2932306A1 (en) * 2008-06-10 2009-12-11 Thales Sa METHOD AND DEVICE FOR AIDING NAVIGATION FOR AN AIRCRAFT WITH RESPECT TO OBSTACLES
CN107729878A (en) * 2017-11-14 2018-02-23 智车优行科技(北京)有限公司 Obstacle detection method and device, equipment, vehicle, program and storage medium
WO2019199027A1 (en) * 2018-04-09 2019-10-17 엘지전자 주식회사 Robot cleaner
WO2021189863A1 (en) * 2020-03-27 2021-09-30 珠海格力电器股份有限公司 Cleaning robot operation control method, apparatus, and system, and computer readable storage medium

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5248388B2 (en) * 2009-03-26 2013-07-31 株式会社東芝 Obstacle risk calculation device, method and program
KR101825687B1 (en) * 2015-04-24 2018-02-05 한국전자통신연구원 The obstacle detection appratus and method using difference image
US9821801B2 (en) * 2015-06-29 2017-11-21 Mitsubishi Electric Research Laboratories, Inc. System and method for controlling semi-autonomous vehicles
CN109709945B (en) * 2017-10-26 2022-04-15 深圳市优必选科技有限公司 Path planning method and device based on obstacle classification and robot
CN108227706B (en) * 2017-12-20 2021-10-19 北京理工华汇智能科技有限公司 Method and device for avoiding dynamic obstacle for robot
CN109990782A (en) * 2017-12-29 2019-07-09 北京欣奕华科技有限公司 A kind of method and apparatus of avoiding obstacles
JP2020004095A (en) * 2018-06-28 2020-01-09 株式会社Soken Autonomous mobile body controller and autonomous mobile body
GB2577915B (en) * 2018-10-10 2021-06-16 Dyson Technology Ltd Path planning
CN112306049B (en) * 2019-07-15 2024-02-23 苏州宝时得电动工具有限公司 Autonomous robot, obstacle avoidance method and device thereof, and storage medium
CN110488839A (en) * 2019-08-30 2019-11-22 长安大学 A kind of legged type robot paths planning method and device based on tangent line interior extrapolation method
CN111067440A (en) * 2019-12-31 2020-04-28 深圳飞科机器人有限公司 Cleaning robot control method and cleaning robot
CN114136318A (en) * 2020-08-13 2022-03-04 科沃斯商用机器人有限公司 Intelligent navigation method and device for machine
US11940800B2 (en) * 2021-04-23 2024-03-26 Irobot Corporation Navigational control of autonomous cleaning robots
CN114839993A (en) * 2022-05-12 2022-08-02 北京凯拉斯信息技术有限公司 Unmanned vehicle line patrol obstacle avoidance method, device, equipment and storage medium
CN115185271B (en) * 2022-06-29 2023-05-23 禾多科技(北京)有限公司 Navigation path generation method, device, electronic equipment and computer readable medium
CN115268463A (en) * 2022-08-23 2022-11-01 广州小鹏自动驾驶科技有限公司 Obstacle avoidance path planning method, vehicle and storage medium
CN115359248A (en) * 2022-09-06 2022-11-18 山东聚祥机械股份有限公司 Robot navigation obstacle avoidance method and system based on meta-learning
CN115755912A (en) * 2022-11-22 2023-03-07 未岚大陆(北京)科技有限公司 Obstacle avoidance control method, device, equipment and storage medium
CN115629612A (en) * 2022-12-19 2023-01-20 科大讯飞股份有限公司 Obstacle avoidance method, device, equipment and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2932306A1 (en) * 2008-06-10 2009-12-11 Thales Sa METHOD AND DEVICE FOR AIDING NAVIGATION FOR AN AIRCRAFT WITH RESPECT TO OBSTACLES
CN107729878A (en) * 2017-11-14 2018-02-23 智车优行科技(北京)有限公司 Obstacle detection method and device, equipment, vehicle, program and storage medium
WO2019199027A1 (en) * 2018-04-09 2019-10-17 엘지전자 주식회사 Robot cleaner
WO2021189863A1 (en) * 2020-03-27 2021-09-30 珠海格力电器股份有限公司 Cleaning robot operation control method, apparatus, and system, and computer readable storage medium

Also Published As

Publication number Publication date
CN115993830A (en) 2023-04-21

Similar Documents

Publication Publication Date Title
US11161246B2 (en) Robot path planning method and apparatus and robot using the same
WO2021115081A1 (en) Three-dimensional object detection and intelligent driving
US10380890B2 (en) Autonomous vehicle localization based on walsh kernel projection technique
US11250288B2 (en) Information processing apparatus and information processing method using correlation between attributes
WO2021142799A1 (en) Path selection method and path selection device
US10565721B2 (en) Information processing device and information processing method for specifying target point of an object
CN110874102B (en) Virtual safety protection area protection system and method for mobile robot
JP7147420B2 (en) OBJECT DETECTION DEVICE, OBJECT DETECTION METHOD AND COMPUTER PROGRAM FOR OBJECT DETECTION
KR20170106963A (en) Object detection using location data and scale space representations of image data
CN115993830B (en) Path planning method and device based on obstacle avoidance and robot
US11204610B2 (en) Information processing apparatus, vehicle, and information processing method using correlation between attributes
CN112060079B (en) Robot and collision detection method and device thereof
Dey et al. VESPA: A framework for optimizing heterogeneous sensor placement and orientation for autonomous vehicles
CN113110521A (en) Mobile robot path planning control method, control device thereof and storage medium
CN112581613A (en) Grid map generation method and system, electronic device and storage medium
CN112686951A (en) Method, device, terminal and storage medium for determining robot position
CN113593035A (en) Motion control decision generation method and device, electronic equipment and storage medium
US20220350342A1 (en) Moving target following method, robot and computer-readable storage medium
CN116931583B (en) Method, device, equipment and storage medium for determining and avoiding moving object
US20230266763A1 (en) Method for navigating root through limited space, robot and computer-readable storage medium
CN116400709A (en) Robot track determining method and device, robot and storage medium
CN114489050A (en) Obstacle avoidance route control method, device, equipment and storage medium for straight line driving
CN117677862A (en) Pseudo image point identification method, terminal equipment and computer readable storage medium
CN116382308B (en) Intelligent mobile machinery autonomous path finding and obstacle avoiding method, device, equipment and medium
US20230101162A1 (en) Mobile body control device, mobile body, mobile body control method, program, and learning device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant