CN116540690A - Robot navigation method, device, robot and storage medium - Google Patents

Robot navigation method, device, robot and storage medium

Info

Publication number
CN116540690A
CN116540690A CN202210095188.7A
Authority
CN
China
Prior art keywords
robot
channel
obstacle
point cloud
cloud data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210095188.7A
Other languages
Chinese (zh)
Inventor
朱卓
薄慕婷
杨咚浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dreame Innovation Technology Suzhou Co Ltd
Original Assignee
Dreame Innovation Technology Suzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dreame Innovation Technology Suzhou Co Ltd filed Critical Dreame Innovation Technology Suzhou Co Ltd
Priority to CN202210095188.7A priority Critical patent/CN116540690A/en
Priority to PCT/CN2022/137568 priority patent/WO2023142710A1/en
Publication of CN116540690A publication Critical patent/CN116540690A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0223Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

Embodiments of the present application disclose a robot navigation method and device, a robot, and a storage medium. The method comprises: obtaining obstacle information while the robot travels; determining the channel type of the robot's current travel channel according to the obstacle information; and obtaining a navigation strategy corresponding to that channel type and controlling the robot's travel based on the navigation strategy. The method enables the robot to rapidly identify the current channel type and to escape entrapment according to the navigation strategy matched to that type, with strong adaptability and a faster, more efficient overall process.

Description

Robot navigation method, device, robot and storage medium
Technical Field
The invention belongs to the technical field of robots, and particularly relates to a robot navigation method and device, a robot, and a storage medium.
Background
With the gradual improvement of living standards, smart household appliances have become widespread, and the sweeping robot is now a common household cleaning appliance. A sweeping robot usually encounters various obstacles, such as common ones like tables, chairs, and stairs, which it can distinguish well. However, because work areas vary widely, other scenes may also be encountered, for example V-, U-, or S-shaped channel scenes formed by furniture and fixtures such as a toilet, a glass shower door, or a washstand. Because no control algorithm targets these scenes, the sweeping robot is easily trapped or misjudges its surroundings, which degrades the user experience.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a robot navigation method and device, a robot, and a storage medium. The method enables the robot to rapidly identify the current channel type and to escape entrapment according to the navigation strategy corresponding to that channel type.
The specific technical scheme provided by the embodiment of the invention is as follows:
in a first aspect, a method for robot navigation is provided, the method comprising:
acquiring obstacle information in the running process of the robot;
determining the channel type of the current running channel of the robot according to the obstacle information;
and acquiring a navigation strategy corresponding to the channel type of the current travelling channel and controlling the robot to travel based on the navigation strategy.
In some embodiments, the obstacle information includes point cloud data of an obstacle; after obtaining the point cloud data of the obstacle, the method further comprises:
judging whether the point cloud data of the obstacle belong to two sides of the robot body or not;
the determining the channel type of the current travel channel of the robot according to the obstacle information specifically comprises the following steps:
when the point cloud data of the obstacles at the two sides of the robot body are acquired, determining the channel type of the current travelling channel according to the distribution condition of the point cloud data of the obstacles at the two sides of the robot body;
the channel type at least comprises any one of an I-type channel, a V-type channel, an arc-shaped channel and a polygonal channel.
In some embodiments, the determining the channel type of the current travel channel of the robot according to the obstacle information specifically further includes:
when point cloud data of the obstacle are absent on one side of the robot body, detecting in real time whether collision position data exist; the collision position data are recorded in real time after the robot collides with the obstacle;
determining a shape of the obstacle on the collision side based on the collision position data when the collision position data is detected;
the channel type is determined based on the shape of the obstacle on the collision side and the distribution of the point cloud data of the obstacle on the other side.
In some embodiments, the determining the channel type of the current travel channel of the robot according to the obstacle information specifically further includes:
when point cloud data of an obstacle are absent on one side of the robot body, detecting in real time whether edge position data exist; the edge position data are acquired and processed in real time by a sensor mounted at the bottom of the robot;
when the edge position data is detected, the channel type is determined based on the distribution of the edge position data and the point cloud data of the obstacle on the other side.
In some embodiments, the controlling the robot travel based on the navigation strategy corresponding to the channel type of the current travel channel specifically includes:
if the channel type of the current travel channel of the robot is a V-shaped channel, controlling the robot to leave the current travel channel;
if the channel type of the current travel channel of the robot is any one of an I-type channel, an arc-shaped channel and a polygonal channel, judging whether a leaving condition is met, and controlling the robot to leave the current travel channel when the leaving condition is confirmed to be met.
In some embodiments, determining that the departure condition is satisfied comprises determining that the travel channel ahead of the robot is a dead end, which specifically comprises:
detecting whether the channel width ahead of the robot's travel channel is smaller than the width of the robot body;
when it is smaller, determining that the channel ahead is a dead end; or, alternatively,
detecting whether a cliff or a virtual exclusion zone exists ahead of the robot;
if a cliff or virtual exclusion zone exists, determining that the channel ahead is a dead end.
In some embodiments, the controlling the robot to leave the current travel path specifically includes:
judging whether the robot can rotate or not;
when the robot cannot rotate, acquiring point cloud data of a safety side behind the robot;
controlling the robot to retreat from the current travelling channel based on the point cloud data of the safety side;
wherein the safety side includes any one of a boundary of an obstacle, a boundary of a marked virtual exclusion zone, and a boundary of a cliff determined based on edge position data.
In some embodiments, the method further comprises:
when a plurality of safety sides are detected, acquiring priority orders of all the safety sides;
the control robot backing off the current travel channel based on the point cloud data of the safety side specifically comprises:
and controlling the robot to back off the current travelling channel based on the point cloud data of the safety side with the highest priority based on the priority sequence of the safety side.
In some embodiments, the controlling the robot to leave the current travel path specifically further includes:
when the safety side does not exist behind the robot, acquiring a history track of the robot entering a current travelling channel;
the control robot is retracted from the current travel path based on the historical track.
In a second aspect, there is provided a robotic navigation device, the device comprising:
the acquisition module is used for acquiring obstacle information in the running process of the robot;
the processing module is used for determining the channel type of the current travel channel of the robot according to the obstacle information;
the acquisition module is also used for acquiring a navigation strategy corresponding to the channel type of the current travelling channel;
and the control module is used for controlling the robot to run based on the navigation strategy corresponding to the channel type of the current running channel.
In a third aspect, a robot is provided comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the method according to the first aspect when executing the computer program.
In a fourth aspect, a computer-readable storage medium is provided, the computer-readable storage medium storing a computer program, which when executed by a processor, implements the method according to the first aspect.
The embodiment of the invention has the following beneficial effects:
1. the invention determines the type of the channel where the robot is currently located by acquiring and processing the obstacle information in the running process of the robot in real time, and acquires different navigation strategies according to different channel types to control the running of the robot.
2. The current channel type is determined by acquiring the distribution condition of the point cloud data at the two sides of the obstacle, and the method is simple and effective to realize;
3. Because of the special material properties of some obstacles, such as glass, or because of limits of the sensor's mounting position and structure, an obstacle may sit in the sensor's blind zone so that no point cloud data can be detected; for such cases the invention determines the shape of the obstacle on that side from collision position data and can still identify the channel type;
4. for some cliffs and other scenes, the point cloud data are difficult to acquire, and based on the point cloud data, the invention acquires edge position data through a sensor arranged at the bottom of the robot, and determines the channel type based on the edge position data and the point cloud data of the obstacle at the other side;
5. The invention implements different navigation strategies for different channel types: a V-shaped channel narrows ahead, so the robot is controlled to leave directly, while for I-type, arc-shaped, and polygonal channels the strategy depends on the actual situation and the robot is controlled to leave only when the departure condition is met;
6. when the robot is controlled to leave the current channel, whether the robot can rotate is judged, when the robot cannot rotate, the robot is controlled to leave based on point cloud data of the rear safety side, and when the robot does not have the rear safety side, the robot is controlled to leave through a historical track, so that the robot can be ensured to stably retreat.
7. According to the invention, when the robot is controlled to leave based on the point cloud data of the safety side, the robot is controlled according to the priority order of the safety side, and the stability of the robot during backward movement is further ensured.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is an exemplary flow chart of a method of robotic navigation according to an embodiment of the present disclosure;
FIG. 2 is a schematic view of a V-channel scenario in accordance with an embodiment of the present disclosure;
FIG. 3 is a schematic view of an arcuate channel scene in accordance with an embodiment of the present disclosure;
FIG. 4 is a schematic view of a robotic travel scenario according to an embodiment of the present disclosure;
FIG. 5 is a schematic view of another driving scenario of a robot according to an embodiment of the disclosure;
fig. 6 is a schematic structural view of a robot navigation device according to an embodiment of the present disclosure;
fig. 7 is a schematic structural view of a robot according to an embodiment of the present disclosure.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
As described in the background, when a sweeping robot performs cleaning work it may encounter scenes beyond common obstacles such as tables, chairs, and stairs, for example V-, U-, or S-shaped channel scenes formed by furniture and fixtures such as a toilet, a glass shower door, or a washstand. Because no control algorithm corresponds to these special scenes, the sweeping robot is easily trapped or misjudges its surroundings, which degrades the user experience.
In order to solve the problem of robots becoming trapped in different channel scenes, the applicant conceived of classifying and summarizing the scenes in which robots are easily trapped and providing a handling method for each, so that during subsequent cleaning the robot can rapidly judge its environment and execute the matching obstacle-avoidance method, reducing the risk of entrapment.
Fig. 1 shows an exemplary flowchart of a robot navigation method according to an embodiment of the present disclosure, which is described in detail as follows:
step 101, obtaining obstacle information in the running process of the robot.
In some embodiments, the obstacle information includes point cloud data of the obstacle.
Specifically, the point cloud data of the obstacle can be acquired by a sensing element installed on the robot body.
The above-mentioned sensing element collects the robot's motion parameters and data about the surrounding environment, and may be one or more sensors such as a laser radar or a camera; it should be understood that the sensing element is not limited thereto, and those skilled in the art may select a suitable sensing element according to actual requirements. Moreover, different sensing elements may be installed at different positions of the robot (such as the front, side, or bottom) according to actual requirements, so as to acquire data from different positions.
For example, if the sensing element includes a laser radar, in order to acquire the point cloud data of the obstacle, the laser radar may be installed directly in front of the robot and acquire the obstacle information through continuous scanning, and the specific process of acquiring the point cloud data by using the laser radar is as follows:
the laser radar comprises a laser and a receiving system, wherein the laser generates and emits light pulses, and when an obstacle exists, the light pulses can strike the obstacle and reflect back, and finally are received by a receiver. The receiver can accurately measure the propagation time of the light pulse from the emission to the reflection back. In view of the known speed of light, the distance to the obstacle can be calculated, and the three-dimensional coordinates of the indicating light spot of each obstacle, namely the point cloud data, can be accurately calculated by combining the height of the laser and the laser scanning angle.
Step 102, determining the channel type of the current running channel of the robot according to the obstacle information.
After the point cloud data of the obstacle are obtained, the shape of the obstacle can be determined according to the point cloud data of the obstacle, so that the channel type of the current running channel of the robot can be determined.
In some embodiments, before determining the channel type of the current travel channel of the robot according to the obstacle information, the method may further include the following processing steps:
and judging whether the point cloud data of the obstacle belong to two sides of the robot body.
When determining that the point cloud data of the obstacles at the two sides of the robot body are acquired, determining the channel type of the current travel channel of the robot according to the obstacle information may specifically include:
determining the channel type of the current travelling channel according to the distribution condition of the point cloud data of the obstacles on the two sides of the robot body; the channel type at least comprises any one of an I-type channel, a V-type channel, an arc-shaped channel and a polygonal channel.
Specifically, in this embodiment, if the point cloud data of the obstacles on the two sides of the body lie along straight, approximately parallel lines, the obstacles on both sides of the robot are roughly parallel, for example between a cabinet and an aisle, or between two walls, and the current channel can be determined to be an I-type channel.
If the point clouds on the two sides of the body gradually converge, the space on both sides of the robot is narrowing, for example under furniture or between a toilet and a wall (see fig. 2), and the current channel can be determined to be a V-shaped channel.
If the point cloud data on the two sides of the body follow a curve and the channel width is close to the body width, the obstacles on both sides are arc-shaped, for example between the side of a toilet and a wall (see fig. 3), and the current channel can be determined to be an arc-shaped channel.
If the point cloud data on the two sides of the body are distributed as regularly spaced clusters with similar distances between adjacent clusters, the robot is in a polygonal area, such as a combination of table and chair legs or the legs of a complex tea table, and the current channel can be determined to be a polygonal channel.
The method can rapidly judge the shape of the obstacle by acquiring the distribution condition of the point cloud data on the two sides of the obstacle, so that the current channel type is determined, and the method is simple and effective to realize.
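The four distribution patterns above can be condensed into a simple classifier; the thresholds, feature names, and return labels below are illustrative assumptions, not values taken from the patent:

```python
def classify_channel(gap_widths, side_curvature=0.0, scattered_clusters=False,
                     parallel_tol=0.05, curve_threshold=0.1):
    """Heuristic channel-type classifier.

    gap_widths: channel width (metres) sampled at increasing distance
    ahead of the robot, derived from the two-sided point clouds.
    """
    if scattered_clusters:
        return "polygon"   # regular clusters, e.g. table and chair legs
    if side_curvature > curve_threshold:
        return "arc"       # sides follow a curve
    if max(gap_widths) - min(gap_widths) <= parallel_tol:
        return "I"         # near-parallel straight sides
    if all(a >= b for a, b in zip(gap_widths, gap_widths[1:])):
        return "V"         # channel narrows monotonically ahead
    return "unknown"
```

In practice the width samples and curvature would come from fitting lines or curves to the left- and right-side point clouds; the sketch only shows the final decision logic.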
In some embodiments, reflection from certain materials (such as a mirror or glass) may prevent the sensing element from obtaining point cloud data, or the sensor's mounting position and structural limitations may leave an obstacle in the sensor's blind zone. For these cases the application also provides a technical scheme that determines the shape of the obstacle from collision position data, which specifically includes the following steps:
when point cloud data of the obstacle are absent on one side of the robot body, detecting in real time whether collision position data exist; the collision position data are recorded in real time after the robot collides with the obstacle;
determining a shape of the obstacle on the collision side based on the collision position data when the collision position data is detected;
the channel type is determined based on the shape of the obstacle on the collision side and the distribution of the point cloud data of the obstacle on the other side.
The collision position data may be obtained by a sensor element (e.g. a pressure sensor) mounted on the side of the robot.
For example, take a pressure sensor in the scene of fig. 3. When the robot travels between the toilet and the glass, the laser radar mounted at the front of the robot cannot acquire point cloud data on the glass side. If a pressure sensor is mounted on the side of the body and the robot collides on that side during travel, the pressure data acquired by the side sensor changes, and the robot's position on the map at the moment of collision, i.e. the collision position data, can be determined from that change. After collision position data have been accumulated, the approximate shape of the obstacle on the collision side can be determined from the collision positions of different points, and the channel type can then be determined from that shape together with the distribution of the point cloud data of the obstacle (the toilet) on the other side; in fig. 3 the channel type is an arc-shaped channel. On this basis the robot can decide whether to continue driving.
Referring to the scene of fig. 4, the robot has a wall on its left and a step on its right; the laser radar can only acquire point cloud data on the wall side, because the step lies in the sensor's blind zone. Here the pressure sensor mounted on the right side of the robot supplies pressure data from which the collision positions, and hence the shape of the step, are determined; the channel type is then determined from the shape of the step together with the distribution of the point cloud data on the wall side (in fig. 4 the robot is in an I-type channel), so the robot can decide whether to continue driving.
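A rough way to recover the blind-side shape from successive bumper contacts, as the two examples describe, is to fit a chord through the collision positions and measure the deviation of the points between its ends; the function name and tolerance are illustrative assumptions:

```python
def side_shape_from_collisions(points, line_tol=0.03):
    """Classify the undetected side as straight or curved.

    points: (x, y) map positions recorded at successive collisions
    along the blind side, in travel order.
    """
    if len(points) < 3:
        return "unknown"
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    chord = (dx * dx + dy * dy) ** 0.5
    # largest perpendicular distance of interior points from the chord
    dev = max(abs(dy * (x - x0) - dx * (y - y0)) / chord
              for x, y in points[1:-1])
    return "straight" if dev <= line_tol else "curved"
```

A straight result combined with straight point clouds on the detected side suggests an I-type channel as in fig. 4; a curved result suggests an arc-shaped channel as in fig. 3.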
In addition, in order to improve the operation efficiency of the robot, the scheme further comprises the following steps:
virtual exclusion zones are determined and marked based on the collision location data.
In this way, if the scene is cleaned again, the robot can clean along the boundary of the marked virtual exclusion zone instead of entering it, which improves working efficiency.
In some embodiments, besides obstacles such as mirrors, glass, and steps, scenes such as cliffs may also appear in practice, where point cloud data likewise cannot be acquired. A supplementary technical scheme is therefore provided, as follows:
when point cloud data of an obstacle are absent on one side of the robot body, detecting in real time whether edge position data exist; the edge position data are acquired and processed in real time by a sensor mounted at the bottom of the robot;
when the edge position data is detected, the channel type is determined based on the distribution of the edge position data and the point cloud data of the obstacle on the other side.
For example, the edge position data may be acquired and processed by an infrared sensor mounted at the bottom of the robot. Referring to fig. 5, the robot travels with a wall on one side and a cliff on the other. The infrared sensor on the right side of the robot's bottom measures the drop below the robot from the reflected signal and determines the robot's edge position data on the map from that height information; the channel type can then be determined from the distribution of the edge position data together with the point cloud data of the obstacle on the wall side (in fig. 5 the robot is in an I-type channel).
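The bottom-sensor logic can be sketched as follows: a sudden jump in the measured floor distance marks a cliff edge. The threshold and names are assumptions for illustration:

```python
def detect_cliff_edges(floor_distances, drop_threshold=0.05):
    """Return indices along the path where a cliff edge is detected.

    floor_distances: distance (metres) from the bottom-mounted sensor
    to the surface below, sampled along the travel path.
    """
    baseline = floor_distances[0]  # assume the robot starts on solid floor
    return [i for i, d in enumerate(floor_distances)
            if d - baseline > drop_threshold]
```

The flagged indices, mapped back to robot poses, give the edge position data used together with the wall-side point cloud to determine the channel type.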
Beyond the above two cases, combinations of multiple scenes may also occur, such as a scene combining a solid obstacle, glass, and a cliff; when such a scene appears, the current channel type can still be determined quickly by the steps above.
Step 103, acquiring a navigation strategy corresponding to the channel type of the current travelling channel and controlling the robot to travel based on the navigation strategy.
Since the navigation strategies corresponding to the different channel types are different, the step 103 specifically includes:
if the channel type of the current travel channel of the robot is a V-shaped channel, controlling the robot to leave the current travel channel;
if the channel type of the current travel channel of the robot is any one of an I-type channel, an arc-shaped channel and a polygonal channel, judging whether a leaving condition is met, and controlling the robot to leave the current travel channel when the leaving condition is confirmed to be met.
In this embodiment, when the robot encounters a V-shaped channel, the space ahead is gradually narrowing, i.e. it is effectively a dead end, and the robot is very likely to become trapped if it continues in, so the robot is controlled to leave directly. When the robot encounters an I-type, arc-shaped, or polygonal channel, the decision depends on the actual situation: these channels are not dead ends like the V-shaped channel and may still be passable, so the specific situation is analyzed on its own terms, and the robot is controlled to leave only when the departure condition is judged to be satisfied.
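The per-type strategy dispatch just described can be summarized in a small sketch (labels and function names are illustrative, not from the patent):

```python
def navigation_action(channel_type, leave_condition_met):
    """Pick the action for the current channel type.

    A V-shaped channel always narrows ahead (a de facto dead end), so
    the robot leaves immediately; I-type, arc-shaped, and polygonal
    channels may still be passable, so the robot leaves only once a
    departure condition (e.g. the way ahead is blocked) is satisfied.
    """
    if channel_type == "V":
        return "leave"
    if channel_type in ("I", "arc", "polygon"):
        return "leave" if leave_condition_met else "continue"
    return "continue"
```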
In addition, so that the robot no longer cleans the V-shaped channel on subsequent passes and its operating efficiency improves, the scheme further includes the following step:
if the channel type of the current travel channel of the robot is a V-shaped channel, recording the position of the current robot and determining the position as a virtual forbidden zone.
Thus, if the scene is cleaned again later, the robot can recognize the current scene from the marked virtual exclusion zone position without cleaning it, leave the exclusion zone directly, and thereby improve working efficiency.
On the basis of the above embodiment, the control of the robot leaving the current travel path specifically includes the following steps:
judging whether the robot can rotate or not;
when the robot cannot rotate, acquiring point cloud data of a safety side behind the robot;
controlling the robot to retreat from the current travelling channel based on the point cloud data of the safety side;
when the safety side does not exist behind the robot, acquiring a history track of the robot entering a current travelling channel;
the control robot is retracted from the current travel path based on the historical track.
Wherein the safety side includes any one of a boundary of an obstacle, a boundary of a marked virtual exclusion zone, and a boundary of a cliff determined based on edge position data.
Because most robots on the market lack a reversing algorithm, when it is judged that the robot needs to leave, whether the robot can rotate is determined first; if it can, the robot rotates by the appropriate angle and leaves. However, besides circular machines, a substantial portion of the machines on the market have non-circular bodies, and such a machine can rotate in place only when its turning radius is smaller than the channel radius. For this reason a reversing algorithm is added to control the robot's departure, and while reversing the robot travels according to the point cloud data of the safety side.
For example, referring to fig. 4, the robot may reverse based on the point cloud data of the wall side, or along the boundary of the step (the boundary of the virtual exclusion zone determined from the collision position data). In addition, given the diversity of scenes, a safety side may not exist at all; in that case the robot can reverse along its historical track, which likewise completes the reversing task well.
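The rotate-or-reverse decision chain in the steps above might look like this in outline (the function name and return labels are assumptions):

```python
def plan_exit(turn_radius, channel_radius, safety_side_points, history_track):
    """Decide how the robot leaves the current channel.

    A robot can rotate in place only if its turning radius fits the
    channel; otherwise it reverses along the safety side, or, if no
    safety side exists behind it, re-traces its entry track backwards.
    """
    if turn_radius < channel_radius:
        return ("rotate_and_leave", None)
    if safety_side_points:
        return ("reverse_along_safety_side", safety_side_points)
    return ("reverse_along_history", list(reversed(history_track)))
```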
On the basis of the above embodiment, the present solution may further include the following steps:
when a plurality of safety sides are detected, acquiring the priority order of the safety sides;
controlling the robot, according to the priority order, to retreat from the current travel channel based on the point cloud data of the safety side with the highest priority.
Wherein, the safety side is not unique: the boundary of an obstacle, the boundary of a marked virtual exclusion zone, the boundary of a cliff determined based on edge position data, and the like are all safety sides. According to the actual situation, retreating along the boundary of an obstacle is the most stable, so the safety sides may be prioritized, for example: the boundary of an obstacle has a higher priority than the boundary of a marked virtual exclusion zone, which in turn has a higher priority than the boundary of a cliff. When a plurality of safety sides are present at the same time, the safety side with the highest priority can be selected for the retreat according to this order, further ensuring the stability of the robot while it retreats.
For example, referring to fig. 4, the priority of the wall side is higher than that of the step side, so when retreating, the robot can retreat based on the point cloud data of the wall side; referring to fig. 5, similarly, the priority of the wall side is higher than that of the cliff side, so when retreating, the robot may likewise retreat based on the point cloud data of the wall side, further ensuring its stability while retreating.
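The priority ordering discussed above can be sketched as follows; the priority table and the `(kind, point_cloud)` representation are illustrative assumptions drawn from the examples (wall over step over cliff):

```python
# Lower number = higher priority: obstacle boundary (e.g. a wall) first,
# then a marked virtual exclusion zone boundary (e.g. a step), then a cliff.
SAFETY_SIDE_PRIORITY = {"obstacle": 0, "virtual_zone": 1, "cliff": 2}

def pick_safety_side(sides):
    """sides: list of (kind, point_cloud) pairs; return the highest-priority pair."""
    return min(sides, key=lambda side: SAFETY_SIDE_PRIORITY[side[0]])
```

For instance, given both a cliff side and a wall (obstacle) side, the function selects the wall side, matching the fig. 5 example.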
In some embodiments, determining that the departure condition is satisfied comprises: determining that the front of the robot's travel is a dead end; which specifically comprises the following steps:
detecting whether the channel distance ahead of the robot's travel channel is smaller than the width of the robot body;
when it is smaller, determining that the front of the robot's travel is a dead end; or,
detecting whether a cliff or a virtual exclusion zone exists ahead of the robot;
if a cliff or a virtual exclusion zone exists, determining that the front of the robot's travel is a dead end.
Wherein, when the channel distance ahead of the robot's travel channel is smaller than the body width, the robot cannot pass smoothly, so the front can be determined to be a dead end even if there is no obstacle ahead; in addition, a cliff or a virtual exclusion zone is likewise a scene the robot cannot pass, and is therefore also confirmed as a dead end. In addition, the method comprises the following steps:
detecting whether an obstacle meeting a preset condition exists ahead of the robot's travel;
when such an obstacle exists, determining that the front of the robot's travel is a dead end; or,
detecting whether the number of collisions of the robot during travel is greater than a preset value;
when it is greater than the preset value, determining that the front of the robot's travel is a dead end.
Wherein, an obstacle meeting the preset condition is an obstacle that the robot cannot smoothly bypass or cross, so when such an obstacle is encountered the front can be confirmed as a dead end; in addition, when the number of collisions is greater than the preset value, the current driving area is shown to be narrow, and for safety the robot can default to treating the front as a dead end, thereby triggering the escape.
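Taken together, the four dead-end conditions above can be sketched as a single predicate; all parameter names and the threshold convention are illustrative assumptions:

```python
def is_dead_end(channel_width, body_width, cliff_or_zone_ahead,
                impassable_obstacle_ahead, collision_count, collision_limit):
    """Return True when any of the dead-end conditions holds."""
    # Channel narrower than the body: cannot pass even without an obstacle.
    if channel_width < body_width:
        return True
    # A cliff or virtual exclusion zone ahead is a scene the robot cannot pass.
    if cliff_or_zone_ahead:
        return True
    # An obstacle the robot cannot bypass or cross also blocks the way.
    if impassable_obstacle_ahead:
        return True
    # Too many collisions: the area is too narrow, default to a dead end.
    return collision_count > collision_limit
```

Any one condition being true is enough to trigger the departure (escape) behaviour.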
The invention determines the type of the channel in which the robot is currently located by acquiring and processing obstacle information in real time while the robot travels, and acquires different navigation strategies according to the different channel types to control the travel of the robot.
With continued reference to fig. 6, as an implementation of the method shown in fig. 1 described above, there is provided an embodiment of a robot navigation device, which corresponds to the method embodiment shown in fig. 1, and as shown in fig. 6, the robot navigation device of this embodiment includes:
an acquisition module 601, configured to acquire obstacle information during a robot traveling process;
a processing module 602, configured to determine a channel type of a current travel channel of the robot according to the obstacle information;
the acquiring module 601 is further configured to acquire a navigation policy corresponding to a channel type of a current travel channel;
a control module 603 for controlling the robot travel based on a navigation strategy corresponding to the channel type of the current travel channel.
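A minimal sketch of how modules 601 to 603 could be wired together, assuming each module is a plain callable; the class and method names are illustrative, not the patent's implementation:

```python
class RobotNavigationDevice:
    """Sketch of modules 601-603: acquire -> classify -> select strategy -> drive."""

    def __init__(self, acquire, classify, strategies, drive):
        self.acquire = acquire        # acquisition module 601
        self.classify = classify      # processing module 602
        self.strategies = strategies  # maps channel type -> navigation strategy
        self.drive = drive            # control module 603

    def step(self):
        obstacle_info = self.acquire()                 # obstacle info while travelling
        channel_type = self.classify(obstacle_info)    # e.g. "I", "V", "arc"
        strategy = self.strategies[channel_type]       # strategy for this channel type
        self.drive(strategy)                           # control the robot's travel
        return channel_type, strategy
```

Each call to `step` performs one acquire/classify/control cycle, mirroring the method of fig. 1.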
In some optional implementations of the present embodiment, the obstacle information includes point cloud data of the obstacle; the processing module 602 is specifically configured to:
after the point cloud data of the obstacle are acquired, judging whether the point cloud data of the obstacle belong to two sides of the robot body or not;
when the point cloud data of the obstacles at the two sides of the robot body are acquired, determining the channel type of the current travelling channel according to the distribution condition of the point cloud data of the obstacles at the two sides of the robot body;
the channel type at least comprises any one of an I-type channel, a V-type channel, an arc-shaped channel and a polygonal channel.
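As a toy illustration of classifying the channel type from the two-sided point cloud distribution, suppose a straight line is fitted to each side's points and summarized as an (angle, residual) pair; the thresholds and the omission of the polygonal case are assumptions of this sketch, not the patent's method:

```python
def classify_channel(left_fit, right_fit, angle_tol=5.0, arc_residual=0.05):
    """left_fit / right_fit: (angle_deg, residual) of a line fitted to the
    point cloud on each side of the body. A multi-segment fit (omitted here)
    would be needed to detect the polygonal channel type."""
    (left_angle, left_res), (right_angle, right_res) = left_fit, right_fit
    # A large fit residual means the side is curved rather than straight.
    if left_res > arc_residual or right_res > arc_residual:
        return "arc"
    # Straight, roughly parallel sides: an I-type channel.
    if abs(left_angle - right_angle) <= angle_tol:
        return "I"
    # Straight sides converging toward a point: a V-type channel.
    return "V"
```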
In some optional implementations of this embodiment, the processing module 602 is specifically configured to:
detecting whether collision position data exist in real time when the point cloud data of the obstacle at any side of the robot body do not exist; the collision position data is obtained by real-time recording after the robot collides with the obstacle;
determining a shape of the obstacle on the collision side based on the collision position data when the collision position data is detected;
the channel type is determined based on the shape of the obstacle on the collision side and the distribution of the point cloud data of the obstacle on the other side.
In some optional implementations of this embodiment, the processing module 602 is specifically further configured to:
detecting whether edge position data exist in real time when point cloud data of an obstacle on any side of the robot body do not exist; the edge position data are acquired in real time by a sensor arranged at the bottom of the robot;
when the edge position data is detected, the channel type is determined based on the distribution of the edge position data and the point cloud data of the obstacle on the other side.
In some optional implementations of this embodiment, the control module 603 is specifically configured to:
if the channel type of the current travel channel of the robot is a V-shaped channel, controlling the robot to leave the current travel channel;
if the channel type of the current travel channel of the robot is any one of an I-type channel, an arc-shaped channel and a polygonal channel, judging whether a leaving condition is met, and controlling the robot to leave the current travel channel when the leaving condition is confirmed to be met.
In some optional implementations of this embodiment, the control module 603 is specifically further configured to:
determining that the front of the robot's travel is a dead end; which specifically comprises the following steps:
detecting whether the channel distance ahead of the robot's travel channel is smaller than the width of the robot body;
when it is smaller, determining that the front of the robot's travel is a dead end; or,
detecting whether a cliff or a virtual exclusion zone exists ahead of the robot;
if a cliff or a virtual exclusion zone exists, determining that the front of the robot's travel is a dead end.
In some optional implementations of this embodiment, the control module 603 is specifically further configured to:
judging whether the robot can rotate or not;
when the robot cannot rotate, acquiring point cloud data of a safety side behind the robot;
controlling the robot to retreat from the current travelling channel based on the point cloud data of the safety side;
wherein the safety side includes any one of a boundary of an obstacle, a boundary of a marked virtual exclusion zone, and a boundary of a cliff determined based on edge position data.
In some optional implementations of this embodiment, the control module 603 is specifically further configured to:
when a plurality of safety sides are detected, acquiring priority orders of all the safety sides;
the controlling the robot to retreat from the current travel channel based on the point cloud data of the safety side specifically comprises:
controlling the robot, according to the priority order of the safety sides, to retreat from the current travel channel based on the point cloud data of the safety side with the highest priority.
In some optional implementations of this embodiment, the control module 603 is specifically further configured to:
when no safety side exists behind the robot, acquiring the historical track along which the robot entered the current travel channel;
controlling the robot to retreat from the current travel channel based on the historical track.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Fig. 7 discloses a schematic diagram of a robot according to an embodiment of the present invention. As shown in fig. 7, the robot includes: a memory 71, a processor 72 and a computer program 73 stored in the memory 71 and executable on the processor 72, for example a program for a robot navigation method. The steps of one embodiment of the robot navigation method described above, such as steps 101 through 103 shown in fig. 1, are implemented when the processor 72 executes the computer program 73. Alternatively, the processor 72, when executing the computer program 73, implements the functions of the modules in one embodiment of the robotic navigation device described above, such as the functions of the modules 601-603 shown in fig. 6. The robot further includes a measuring element 74 and a movement unit 75.
The measurement element 74 may be a radar, sensor, or the like; wherein the radar can be a laser radar or an infrared radar, and the laser radar can be a single-line radar or a multi-line radar.
The movement unit 75 is used to control the robot movement.
The processor 72 may be a central processing unit (Central Processing Unit, CPU), but may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), off-the-shelf programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 71 may be an internal storage unit of the robot, such as a hard disk or memory of the robot. The memory 71 may also be an external storage device of the robot, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the robot. Further, the memory 71 may include both an internal storage unit and an external storage device of the robot. The memory 71 is used to store the computer program and the other programs and data required by the robot, and may also be used to temporarily store data that has been output or is to be output.
It will be appreciated by those skilled in the art that fig. 7 is merely an example of a robot and does not constitute a limitation; the robot may include more or fewer components than shown, may combine certain components, or may have different components. For example, the robot may also include input and output devices, network access devices, buses, and the like.
It should be noted that, because the content of information interaction and execution process between the above devices/units is based on the same concept as the method embodiment of the present application, specific functions and technical effects thereof may be referred to in the method embodiment section, and will not be described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
Embodiments of the present application also provide a computer readable storage medium storing a computer program which, when executed by a processor, implements steps that may implement the various method embodiments described above.
Program portions of the technology may be considered to be "products" or "articles of manufacture" in the form of executable code and/or associated data, embodied or carried out by a computer readable medium. A tangible, persistent storage medium may include any memory or storage used by a computer, processor, or similar device or related module. Such as various semiconductor memories, tape drives, disk drives, or the like, capable of providing storage functionality for software.
All or a portion of the software may sometimes communicate over a network, such as the internet or other communication network. Such communication may load software from one computer device or processor to another. Thus, another medium capable of carrying software elements may also be used as a physical connection between local devices, such as optical, electrical, electromagnetic, etc., propagating through cable, optical cable, air, etc. Physical media used for carrier waves, such as electrical, wireless, or optical, may also be considered to be software-bearing media. Unless limited to a tangible "storage" medium, other terms used herein to refer to a computer or machine "readable medium" mean any medium that participates in the execution of any instructions by a processor.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered to be within the scope of this specification.
The foregoing examples illustrate only a few embodiments of the invention, which are described in detail and are not to be construed as limiting the scope of the invention. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the invention, which are all within the scope of the invention. Accordingly, the scope of protection of the present invention is to be determined by the appended claims.

Claims (12)

1. A method of robotic navigation, the method comprising:
acquiring obstacle information in the running process of the robot;
determining the channel type of the current running channel of the robot according to the obstacle information;
and acquiring a navigation strategy corresponding to the channel type of the current travelling channel and controlling the robot to travel based on the navigation strategy.
2. The method of claim 1, wherein the obstacle information comprises point cloud data of an obstacle; after obtaining the point cloud data of the obstacle, the method further comprises:
judging whether the point cloud data of the obstacle belong to two sides of the robot body or not;
the determining the channel type of the current travel channel of the robot according to the obstacle information specifically comprises the following steps:
when the point cloud data of the obstacles at the two sides of the robot body are acquired, determining the channel type of the current travelling channel according to the distribution condition of the point cloud data of the obstacles at the two sides of the robot body;
the channel type at least comprises any one of an I-type channel, a V-type channel, an arc-shaped channel and a polygonal channel.
3. The method according to claim 2, wherein the determining the channel type of the current travel channel of the robot according to the obstacle information specifically further comprises:
detecting whether collision position data exist in real time when the point cloud data of the obstacle at any side of the robot body do not exist; the collision position data are recorded in real time after the robot collides with the obstacle;
determining a shape of the obstacle on the collision side based on the collision position data when the collision position data is detected;
the channel type is determined based on the shape of the obstacle on the collision side and the distribution of the point cloud data of the obstacle on the other side.
4. The method according to claim 2, wherein the determining the channel type of the current travel channel of the robot according to the obstacle information specifically further comprises:
detecting whether edge position data exist in real time when point cloud data of an obstacle on any side of the robot body do not exist; the edge position data are acquired and processed in real time by a sensor arranged at the bottom of the robot;
when the edge position data is detected, the channel type is determined based on the distribution of the edge position data and the point cloud data of the obstacle on the other side.
5. The method according to any one of claims 2 to 4, wherein controlling the robot travel based on the navigation strategy corresponding to the channel type of the current travel channel comprises:
if the channel type of the current travel channel of the robot is a V-shaped channel, controlling the robot to leave the current travel channel;
if the channel type of the current travel channel of the robot is any one of an I-type channel, an arc-shaped channel and a polygonal channel, judging whether a leaving condition is met, and controlling the robot to leave the current travel channel when the leaving condition is confirmed to be met.
6. The method of claim 5, wherein the determining that the departure condition is satisfied comprises: determining that the front of the robot's travel is a dead end; which specifically comprises:
detecting whether the channel distance ahead of the robot's travel channel is smaller than the width of the robot body;
when it is smaller, determining that the front of the robot's travel is a dead end; or,
detecting whether a cliff or a virtual exclusion zone exists ahead of the robot;
if a cliff or a virtual exclusion zone exists, determining that the front of the robot's travel is a dead end.
7. The method according to claim 5, wherein controlling the robot to leave the current travel path comprises:
judging whether the robot can rotate or not;
when the robot cannot rotate, acquiring point cloud data of a safety side behind the robot;
controlling the robot to retreat from the current travelling channel based on the point cloud data of the safety side;
wherein the safety side includes any one of a boundary of an obstacle, a boundary of a marked virtual exclusion zone, and a boundary of a cliff determined based on edge position data.
8. The method of claim 7, wherein the method further comprises:
when a plurality of safety sides are detected, acquiring priority orders of all the safety sides;
the controlling the robot to retreat from the current travel channel based on the point cloud data of the safety side specifically comprises:
controlling the robot, according to the priority order of the safety sides, to retreat from the current travel channel based on the point cloud data of the safety side with the highest priority.
9. The method of claim 8, wherein controlling the robot to leave the current travel path specifically further comprises:
when no safety side exists behind the robot, acquiring the historical track along which the robot entered the current travel channel;
controlling the robot to retreat from the current travel channel based on the historical track.
10. A robotic navigation device, the device comprising:
the acquisition module is used for acquiring obstacle information in the running process of the robot;
the processing module is used for determining the channel type of the current travel channel of the robot according to the obstacle information;
the acquisition module is also used for acquiring a navigation strategy corresponding to the channel type of the current travelling channel;
and the control module is used for controlling the robot to run based on the navigation strategy corresponding to the channel type of the current running channel.
11. A robot comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any one of claims 1-9 when executing the computer program.
12. A computer readable storage medium storing a computer program, which when executed by a processor performs the method of any one of claims 1 to 9.
CN202210095188.7A 2022-01-26 2022-01-26 Robot navigation method, device, robot and storage medium Pending CN116540690A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210095188.7A CN116540690A (en) 2022-01-26 2022-01-26 Robot navigation method, device, robot and storage medium
PCT/CN2022/137568 WO2023142710A1 (en) 2022-01-26 2022-12-08 Robot navigation method and apparatus, robot and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210095188.7A CN116540690A (en) 2022-01-26 2022-01-26 Robot navigation method, device, robot and storage medium

Publications (1)

Publication Number Publication Date
CN116540690A true CN116540690A (en) 2023-08-04

Family

ID=87449348

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210095188.7A Pending CN116540690A (en) 2022-01-26 2022-01-26 Robot navigation method, device, robot and storage medium

Country Status (2)

Country Link
CN (1) CN116540690A (en)
WO (1) WO2023142710A1 (en)


Also Published As

Publication number Publication date
WO2023142710A1 (en) 2023-08-03


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination