CN114019951B - Robot control method and device, robot and readable storage medium

Info

Publication number: CN114019951B
Application number: CN202111169113.0A
Authority: CN (China)
Other versions: CN114019951A (Chinese)
Prior art keywords: robot, obstacle, target, laser radar, touch
Inventors: 夏俊超, 梁康华, 杨永森
Assignees: Yunjing Intelligence Technology Dongguan Co Ltd; Yunjing Intelligent Shenzhen Co Ltd
Application filed by Yunjing Intelligence Technology Dongguan Co Ltd and Yunjing Intelligent Shenzhen Co Ltd; priority to CN202111169113.0A
Legal status: Active (granted)

Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 - Control of position or course in two dimensions
    • G05D1/021 - Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 - Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238 - Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024 - Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • G05D1/0276 - Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 - Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Optics & Photonics (AREA)
  • Electromagnetism (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a robot control method and device, a robot and a readable storage medium. The robot includes a laser radar, and the method comprises: when the laser radar touches an obstacle, controlling the robot to execute a target behavior, the target behavior comprising the robot moving in reverse for a target duration and then advancing a target distance; and, while the target behavior is being executed, repeating the target behavior if the laser radar touches the obstacle again. The reverse motion is a motion opposite to the motion of the robot before it touched the obstacle. When the laser radar of the robot collides with an obstacle, the invention controls the robot's behavior more reasonably, so that the robot's movement is not blocked by the obstacle and missed cleaning of the area around the obstacle is reduced.

Description

Robot control method and device, robot and readable storage medium
Technical Field
The present invention relates to the field of robots, and in particular to a robot control method and apparatus, a robot, and a readable storage medium.
Background
A robot is usually provided with a laser radar, and the laser radar usually protrudes from the upper surface of the robot; this protruding arrangement is used for environment mapping, navigation and other applications. If, while the robot is moving, the room contains an obstacle that is higher than the upper surface of the robot but not higher than the laser radar, the obstacle may block the robot's movement, strike the laser radar, or trap the robot.
In the prior art, when the laser radar collides with an obstacle, the robot is usually controlled to move away from the obstacle, but this avoidance strategy can leave the area where the obstacle is located uncleaned.
Disclosure of Invention
The main object of the present invention is to provide a robot control method and device, a robot and a readable storage medium that control the robot's behavior more reasonably when its laser radar touches an obstacle, prevent the robot's movement from being blocked by the obstacle, and reduce missed cleaning of the area where the obstacle is located.
To achieve the above object, the present invention provides a control method of a robot, the robot comprising a laser radar, the method comprising:
when the laser radar touches an obstacle, controlling the robot to execute a target behavior, wherein the target behavior comprises the robot moving in reverse for a target duration and then advancing a target distance;
while the target behavior is being executed, repeating the target behavior if the laser radar is detected to touch the obstacle again;
wherein the reverse motion is a motion opposite to the motion of the robot before it touched the obstacle.
Optionally, the target duration comprises a first duration and a second duration, and the robot moving in reverse for the target duration comprises:
the robot rotating in reverse for the first duration and then retreating for the second duration,
or the robot retreating for the second duration and then rotating in reverse for the first duration,
or the robot rotating in reverse for the first duration while retreating for the second duration.
Optionally, before the robot moves in reverse for the target duration, the method further comprises:
determining a historical touch azimuth of the robot according to historical motion information of the robot before it touched the obstacle;
determining a target rotation direction for the reverse rotation according to the historical touch azimuth;
and the robot rotating in reverse for the first duration comprises:
rotating in reverse for the first duration according to the target rotation direction.
Optionally, the robot rotating in reverse for the first duration comprises:
if the robot is moving along a navigation path, determining the target rotation direction according to the path direction;
and rotating in reverse for the first duration according to the target rotation direction.
Optionally, the robot moving in reverse for the target duration comprises:
determining an average angular velocity and/or an average linear velocity over a preset time period before the laser radar touched the obstacle;
and controlling the robot to move in reverse for the target duration according to the average angular velocity and/or the average linear velocity.
Optionally, the method further comprises:
recording the number of touches for which the laser radar is detected to touch the obstacle;
and if the number of touches is greater than or equal to a preset number, ending the target behavior after the reverse motion has been executed.
Optionally, the method further comprises:
acquiring the touch azimuth at which the laser radar touches the obstacle, and acquiring target motion information of the robot;
determining an obstacle profile according to the touch azimuth and the target motion information;
and drawing an obstacle map according to the obstacle profile.
Optionally, the target motion information comprises the movement speed of the robot, and determining the obstacle profile according to the touch azimuth and the target motion information comprises:
determining a target coordinate position of the obstacle point in the robot coordinate system according to the touch azimuth and the movement speed;
converting the target coordinate position into the world coordinate system to obtain the actual coordinate position of the obstacle point;
and determining the obstacle profile from the actual coordinate positions obtained when the obstacle is touched multiple times.
In addition, to achieve the above object, the present invention also provides a control device of a robot.
The control device comprises a behavior execution module and a re-detection module, wherein:
the behavior execution module is configured to control the robot to execute a target behavior when the laser radar is detected to touch an obstacle, the target behavior comprising the robot moving in reverse for a target duration and then advancing a target distance;
the re-detection module is configured to repeat the target behavior if the laser radar is detected to touch the obstacle again while the target behavior is being executed; wherein the reverse motion is a motion opposite to the motion of the robot before it touched the obstacle.
In addition, in order to achieve the above object, the present invention provides a robot including a memory, a processor, and a control program of the robot stored in the memory and executable on the processor, the control program of the robot implementing the steps of the control method of the robot described in any one of the above when executed by the processor.
In addition, in order to achieve the above object, the present invention also provides a computer-readable storage medium having stored thereon a control program of a robot, which when executed by a processor, implements the steps of the control method of the robot described in any one of the above.
According to the robot control method and device, the robot and the readable storage medium provided herein, when the laser radar touches an obstacle the robot is controlled to execute a target behavior, and the target behavior is repeated if the laser radar touches the obstacle again while it is being executed. The target behavior consists of the robot moving in reverse for a target duration and then advancing a target distance, the reverse motion being the opposite of the robot's motion before the collision. Moving in reverse for the target duration and then advancing makes the robot back off from the obstacle by a short distance and then move along it; if the laser radar touches the obstacle again, the robot has again come close to the obstacle, and the repeated short reverse motions let the laser radar clear the obstacle while the area around the obstacle is still being cleaned. The robot's behavior when the laser radar touches an obstacle is therefore controlled more reasonably: the robot's movement is not blocked by the obstacle, and missed cleaning of the area where the obstacle is located is reduced.
Drawings
FIG. 1 is a schematic structural diagram of a robot according to an embodiment of the present invention;
FIG. 1A is a schematic diagram of a laser radar touching an obstacle according to an embodiment of the present invention;
FIG. 2 is a flow chart of a first embodiment of the control method of the robot of the present invention;
FIG. 2A is a schematic diagram of a laser radar touching an obstacle according to an embodiment of the present invention;
FIG. 3 is a flow chart of a second embodiment of the control method of the robot of the present invention;
FIG. 4 is a flow chart of a third embodiment of the control method of the robot of the present invention;
FIG. 5 is a flow chart of a fourth embodiment of the control method of the robot of the present invention;
FIG. 6 is a flow chart of a fifth embodiment of the control method of the robot of the present invention;
FIG. 7 is a schematic diagram of the control device of the robot of the present invention;
FIG. 8 is a schematic diagram of a front middle touch azimuth according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a front left touch azimuth according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of a front right touch azimuth according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of a rear middle touch azimuth according to an embodiment of the present invention;
FIG. 12 is a schematic diagram of a rear left touch azimuth according to an embodiment of the present invention;
FIG. 13 is a schematic diagram of a rear right touch azimuth according to an embodiment of the present invention;
FIG. 14 is a schematic diagram of a scenario in which the laser radar and an obstacle enter a touch dead cycle according to an embodiment of the present invention;
FIG. 15 is a schematic diagram of a scenario in which the touch dead cycle between the laser radar and an obstacle is broken according to an embodiment of the present invention.
Reference numerals: 100 - robot; 101 - laser radar; 102 - obstacle; 103 - first travel route; 104 - second travel route; 105 - third travel route.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings and in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
With the continuous development of smart home technology, various smart home devices have emerged, and robots are one of them.
The robot related to the invention can comprise a cleaning robot, a logistics robot, a storage robot and the like, wherein the cleaning robot can be used for automatically cleaning the ground, and the application scene can be household indoor cleaning, large-scale place cleaning and the like.
Types of cleaning robots include floor sweeping robots, floor mopping robots, sweeping-and-mopping robots, and the like. A cleaning robot is provided with a cleaning assembly and a driving device: driven by the driving device, it moves by itself along a set cleaning path and cleans the floor with the cleaning assembly. For a floor sweeping robot, the cleaning assembly comprises a sweeping assembly and a dust suction device; during cleaning, the sweeping assembly, which may include a side brush assembly, sweeps dust and debris toward the suction port of the dust suction device, and the dust suction device temporarily collects the dust and debris. For a floor mopping robot, the cleaning assembly comprises a mopping assembly that stays in contact with the ground, and the mopping member wipes the floor as the robot moves, thereby cleaning it.
Fig. 1 is a schematic structural diagram of a robot according to an embodiment of the present invention.
As shown in fig. 1, the apparatus may include a processor 1001 (e.g., a CPU), a memory 1002, a laser radar 1004, and a communication bus 1003. The communication bus 1003 is used to enable communication between these components. The memory 1002 may be a high-speed RAM memory or a non-volatile memory, such as a disk memory; alternatively, the memory 1002 may be a storage device separate from the processor 1001. The robot may also include other types of sensors, such as vision sensors, ground detection sensors, cliff sensors, collision sensors, distance sensors, drop sensors, counters, gyroscopes, and the like.
The laser radar is generally arranged so that it protrudes from the top of the robot body. In operation the laser radar rotates: a transmitter on the laser radar emits laser signals, the signals are reflected by obstacles, and a receiver of the laser radar receives the reflected signals. By analyzing the received signals, the circuit unit of the laser radar can obtain information about the surrounding environment, such as the distance and angle of an obstacle relative to the laser radar.
In practical applications, a robot moving through its workplace may touch an obstacle. Because the laser radar protrudes from the top of the robot body, the robot may collide with an obstacle that is higher than the robot's upper surface but not higher than the top of the laser radar. As shown in fig. 1A, part of the robot body is then located under the obstacle and the laser radar collides with it; if the robot simply moves away after the touch, it avoids the obstacle but the area under the obstacle may be left uncleaned.
It will be appreciated by those skilled in the art that the device structure shown in fig. 1 is not limiting of the device and may include more or fewer components than shown, or may be combined with certain components, or a different arrangement of components.
As shown in fig. 1, the memory 1002, as a kind of computer storage medium, may store a control program of the robot.
In the apparatus shown in fig. 1, the processor 1001 may be used to call a control program of the robot stored in the memory 1002 and perform the following operations:
when the laser radar touches an obstacle, controlling the robot to execute a target behavior, wherein the target behavior comprises the robot moving in reverse for a target duration and then advancing a target distance;
while the target behavior is being executed, repeating the target behavior if the laser radar is detected to touch the obstacle again;
wherein the reverse motion is a motion opposite to the motion of the robot before it touched the obstacle.
Further, the target duration comprises a first duration and a second duration, and the processor 1001 may call the control program of the robot stored in the memory 1002 and further perform the following operations:
the robot rotating in reverse for the first duration and then retreating for the second duration,
or the robot retreating for the second duration and then rotating in reverse for the first duration,
or the robot rotating in reverse for the first duration while retreating for the second duration.
Further, before the robot moves in reverse for the target duration, the processor 1001 may call the control program of the robot stored in the memory 1002 and further perform the following operations:
determining a historical touch azimuth of the robot according to historical motion information of the robot before it touched the obstacle;
determining a target rotation direction for the reverse rotation according to the historical touch azimuth;
the robot rotating in reverse for the first duration comprising:
rotating in reverse for the first duration according to the target rotation direction.
Further, the processor 1001 may call the control program of the robot stored in the memory 1002 and further perform the following operations:
if the robot is moving along a navigation path, determining the target rotation direction according to the path direction;
and rotating in reverse for the first duration according to the target rotation direction.
Further, the processor 1001 may call the control program of the robot stored in the memory 1002 and further perform the following operations:
determining an average angular velocity and/or an average linear velocity over a preset time period before the laser radar touched the obstacle;
and controlling the robot to move in reverse for the target duration according to the average angular velocity and/or the average linear velocity.
Further, the processor 1001 may call the control program of the robot stored in the memory 1002 and further perform the following operations:
recording the number of touches for which the laser radar is detected to touch the obstacle;
and if the number of touches is greater than or equal to a preset number, ending the target behavior after the reverse motion has been executed.
Further, the processor 1001 may call the control program of the robot stored in the memory 1002 and further perform the following operations:
acquiring the touch azimuth at which the laser radar touches the obstacle, and acquiring target motion information of the robot;
determining an obstacle profile according to the touch azimuth and the target motion information;
and drawing an obstacle map according to the obstacle profile.
Further, the processor 1001 may call the control program of the robot stored in the memory 1002 and further perform the following operations:
determining a target coordinate position of the obstacle point in the robot coordinate system according to the touch azimuth and the movement speed;
converting the target coordinate position into the world coordinate system to obtain the actual coordinate position of the obstacle point;
and determining the obstacle profile from the actual coordinate positions obtained when the obstacle is touched multiple times.
Referring to fig. 2, a first embodiment of the present invention provides a control method of a robot, the method including:
Step S10, when the laser radar touches an obstacle, controlling the robot to execute a target behavior, wherein the target behavior comprises the robot moving in reverse for a target duration and then advancing a target distance; wherein the reverse motion is a motion opposite to the motion of the robot before it touched the obstacle.
An obstacle is an object that hinders the robot's intended motion.
The obstacle considered here has a particular height: it is higher than the upper surface of the robot but not higher than the laser radar, so it touches the laser radar and blocks the robot so that it cannot continue. The robot therefore has to move so as to avoid the obstacle. Examples of such obstacles are tea tables, sofas, beds, and the like.
Alternatively, a sensor, such as a pressure sensor, may be used to detect whether the lidar is touched.
Optionally, a cover may be arranged on the outside of the laser radar; the cover touching the obstacle then corresponds to the laser radar touching the obstacle. It should be noted that any action by which the obstacle blocks the laser radar and thereby blocks the robot's intended motion can be regarded as the obstacle touching the laser radar; direct contact between the obstacle and the body of the laser radar is not required.
The target behavior is the behavior by which, after the laser radar touches the obstacle, the robot moves with a tendency to avoid the obstacle. The target behavior comprises the robot moving in reverse for the target duration and then advancing the target distance. The target duration is the duration of the reverse motion, the reverse motion is the motion opposite to the robot's motion before it touched the obstacle, and the target distance is the distance advanced.
Optionally, the reverse motion comprises reverse rotation and backing up. Controlling the robot to rotate in reverse and back up makes it avoid the obstacle by retreating and moving in the reverse direction. Executing the target behavior, the robot first rotates in reverse and backs up, moving away a short distance and resolving the contact with the obstacle; it then advances the target distance, so that its movement is not blocked by the obstacle while it travels along the obstacle.
Step S20, while the target behavior is being executed, repeating the target behavior if the laser radar is detected to touch the obstacle again.
The laser radar may touch the obstacle again while the robot is moving in reverse for the target duration or advancing the target distance. When this happens, the target behavior is executed again; as shown in fig. 2A, while executing the target behavior again the robot keeps checking whether the obstacle is touched, and executes the target behavior once more whenever it is. Cycling in this way, the robot gradually works its way past the obstacle.
In this embodiment, the robot is controlled to execute the target behavior when the laser radar touches the obstacle, and the target behavior is repeated if the laser radar touches the obstacle again while it is being executed. Because the reverse motion is the opposite of the robot's motion before the collision, moving in reverse for the target duration and then advancing the target distance gives the robot a motion tendency that avoids the obstacle, and repeating the target behavior on every new touch lets the robot gradually clear the obstacle. While the target behavior is being repeated, the robot repeatedly approaches and touches the obstacle and can clean the area around it, so the robot's movement is not blocked by the obstacle and missed cleaning of the area where the obstacle is located is reduced.
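A minimal sketch of this touch-and-repeat loop is given below in Python, purely for illustration: the robot object and its primitives (lidar_touched, move_reverse, advance, is_advancing, stop, follow_planned_path, is_cleaning) and the numeric constants are assumptions, not part of the patent.

```python
import time

TARGET_DURATION_S = 1.0   # assumed duration of the reverse motion
TARGET_DISTANCE_M = 0.3   # assumed distance to advance afterwards


def execute_target_behavior(robot) -> bool:
    """Reverse-move for the target duration, then advance the target distance.

    Returns True if the laser radar touched the obstacle again while the
    behavior was running, so the caller can repeat it (step S20).
    """
    robot.move_reverse(duration_s=TARGET_DURATION_S)  # motion opposite to the pre-touch motion
    robot.advance(distance_m=TARGET_DISTANCE_M)       # assumed non-blocking advance command
    while robot.is_advancing():
        if robot.lidar_touched():                     # re-touch detected mid-behavior
            robot.stop()
            return True
        time.sleep(0.01)
    return False


def control_loop(robot):
    while robot.is_cleaning():
        if robot.lidar_touched():                    # step S10 trigger
            while execute_target_behavior(robot):    # step S20: repeat on re-touch
                pass
        else:
            robot.follow_planned_path()
```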
Referring to fig. 3, a second embodiment of the present invention provides a control method of a robot. Based on the first embodiment shown in fig. 2, the step S10 comprises:
Step S11, when the laser radar touches an obstacle, controlling the robot to execute a target behavior, wherein the target behavior comprises the robot moving in reverse for a target duration and then advancing a target distance; the target duration comprises a first duration and a second duration, and the robot moving in reverse for the target duration comprises:
the robot rotating in reverse for the first duration and then retreating for the second duration,
or the robot retreating for the second duration and then rotating in reverse for the first duration,
or the robot rotating in reverse for the first duration while retreating for the second duration.
The reverse motion comprises reverse rotation and backing up. The target duration comprises a first duration, the duration of the reverse rotation, and a second duration, the duration of the backing up; the first duration may be the same as or different from the second duration.
Reverse rotation means rotating in the direction opposite to the reference direction, the reference direction being the rotation direction at or just before the collision. Backing up means moving backwards relative to the forward direction at or just before the collision.
If the motion at or before the collision did not include any forward movement, i.e. the robot was purely rotating, then backing up means moving backwards with the direction of the obstacle taken as the front.
Alternatively, the reverse rotation may be performed according to the touch azimuth at the moment of the touch, i.e. rotating in the direction away from the touch azimuth.
The whole process of the robot executing the target behavior is as follows: reverse rotation and backward movement are performed, and then forward movement is performed.
The robot may rotate in reverse for the first duration and then retreat for the second duration. Specifically, when the robot detects that the laser radar has touched an obstacle, it rotates in reverse for the first duration; after the rotation it checks its motion state at the moment of the touch. If it was advancing while rotating, it retreats for the second duration in the direction opposite to that advance; if it was purely rotating without advancing, it retreats for the second duration with the obstacle taken as the front.
The robot may instead retreat for the second duration and then rotate in reverse for the first duration. Specifically, when the laser radar touches an obstacle the robot first checks whether it was already retreating at the moment of the touch; if so, this step ends. If it was not retreating, it checks whether it was purely rotating without advancing; if so, it retreats for the second duration with the obstacle taken as the front, and if it was rotating while advancing, it retreats for the second duration relative to that advance direction. After the second duration, the robot rotates in reverse for the first duration.
The robot may also rotate in reverse for the first duration while retreating for the second duration. Specifically, when the touch is detected, the robot rotates in reverse during the second duration of retreating, i.e. the reverse rotation and the retreat overlap in time. The robot may first check whether it was purely rotating without advancing; if so it retreats for the second duration with the obstacle taken as the front, and if it was advancing it retreats in the direction opposite to that advance.
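The three orderings above can be summarized in the following illustrative sketch; the ReverseOrder enum and the robot primitives (rotate_reverse, back_up, drive, back_speed, reverse_omega) are hypothetical names introduced only for this example.

```python
from enum import Enum, auto


class ReverseOrder(Enum):
    ROTATE_THEN_BACK = auto()   # rotate in reverse for t1, then back up for t2
    BACK_THEN_ROTATE = auto()   # back up for t2, then rotate in reverse for t1
    SIMULTANEOUS = auto()       # rotate in reverse while backing up


def reverse_motion(robot, t1: float, t2: float, order: ReverseOrder):
    """Execute the reverse motion in one of the three orderings (sketch only)."""
    if order is ReverseOrder.ROTATE_THEN_BACK:
        robot.rotate_reverse(duration_s=t1)
        robot.back_up(duration_s=t2)
    elif order is ReverseOrder.BACK_THEN_ROTATE:
        robot.back_up(duration_s=t2)
        robot.rotate_reverse(duration_s=t1)
    else:
        # Overlap the two motions: command a negative linear velocity together
        # with the reverse angular velocity for the overlapping part.
        overlap = min(t1, t2)
        robot.drive(linear=-robot.back_speed, angular=robot.reverse_omega,
                    duration_s=overlap)
        if t1 > t2:
            robot.rotate_reverse(duration_s=t1 - t2)
        elif t2 > t1:
            robot.back_up(duration_s=t2 - t1)
```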
While the target behavior is being executed, the target behavior is repeated if the laser radar touches the obstacle again. For example: when the robot detects that the laser radar has touched the obstacle, it retreats for the second duration, rotates in reverse for the first duration and then advances; if it then detects that the laser radar has touched the obstacle again, it retreats for the second duration again, rotates in reverse for the first duration again and advances again, and so on until a termination condition for the repetition is reached.
Optionally, the termination condition for the robot to stop repeating the target behavior includes, but is not limited to: the robot no longer detects that the laser radar touches the obstacle; or the number of touches reaches a termination count.
Optionally, when the robot repeats the target behavior, the order of the reverse rotation and the retreat may be the same as before or different. For example: when the robot detects that the laser radar has touched an obstacle, it rotates in reverse for the first duration, retreats for the second duration and then advances the target distance; if it detects that the laser radar has touched the obstacle again, it retreats for the second duration, rotates in reverse for the first duration and then advances, so that in this repetition the order of the retreat and the rotation differs from the previous one.
Optionally, before controlling the robot to move in reverse, the method further comprises: determining a historical touch azimuth of the robot according to historical motion information of the robot before it touched the obstacle; and determining a target rotation direction for the reverse rotation according to the historical touch azimuth. Controlling the robot to rotate in reverse for the first duration then comprises controlling the robot to rotate in reverse for the first duration according to that target rotation direction.
The historical motion information is the motion information of the robot before the touch. It may include at least one of the rotation direction, the movement direction, the angular velocity and the linear velocity of the robot before the laser radar touched the obstacle; "before touching the obstacle" may include the moment of the touch itself. The historical touch azimuth is the touch azimuth at which the laser radar touched the obstacle.
Optionally, the touch azimuth may be defined by dividing the robot body according to its forward and backward directions; for example, the touch azimuths include the left front, left rear, right front and right rear of the robot. "Front" and "rear" are not fixed: for example, the side of the robot close to the laser radar may be called the rear and the side far from the laser radar the front, or the touch azimuths may be defined differently. A touch azimuth may also correspond to a specific region of the body.
Referring to fig. 8 to 13, fig. 8 to 13 show schematic views of touch positions in a specific scenario, wherein fig. 8 is a schematic view of a front middle touch when the laser radar 101 of the robot 100 touches the obstacle 102, fig. 9 is a schematic view of a front left touch when the laser radar 101 of the robot 100 touches the obstacle 102, fig. 10 is a schematic view of a front right touch when the laser radar 101 of the robot 100 touches the obstacle 102, fig. 11 is a schematic view of a rear middle touch when the laser radar 101 of the robot 100 touches the obstacle 102, fig. 12 is a schematic view of a rear left touch when the laser radar 101 of the robot 100 touches the obstacle 102, and fig. 13 is a schematic view of a rear right touch when the laser radar 101 of the robot 100 touches the obstacle 102.
The historical touch azimuth of the robot is determined from the historical motion information of the robot before it touched the obstacle as follows:
if the historical motion information is that the robot was rotating to the left and advancing before touching the obstacle, with an angular velocity greater than a first preset angular velocity, the historical touch azimuth is determined to be the left front of the robot; the first preset angular velocity is the angular velocity threshold associated with rotating to the left while advancing.
If the historical motion information is that the robot was rotating to the left and retreating when it touched the obstacle, with an angular velocity greater than a second preset angular velocity, the touch azimuth is determined to be the right rear of the robot; the second preset angular velocity is the threshold associated with rotating to the left while retreating.
If the historical motion information is that the robot was rotating to the right and advancing when it touched the obstacle, with an angular velocity greater than a third preset angular velocity, the touch azimuth is determined to be the right front of the robot; the third preset angular velocity is the threshold associated with rotating to the right while advancing.
If the historical motion information is that the robot was rotating to the right and retreating when it touched the obstacle, with an angular velocity greater than a fourth preset angular velocity, the touch azimuth is determined to be the left rear of the robot; the fourth preset angular velocity is the threshold associated with rotating to the right while retreating.
If the angular velocity of the robot when it touched the obstacle is smaller than a fifth preset angular velocity, the touch azimuth is determined to be the middle of the robot.
The first, second, third, fourth and fifth preset angular velocities may be the same or different.
After the historical touch azimuth has been determined, the target rotation direction is determined from it and the robot is controlled to rotate in that direction; the target rotation direction is the direction of the reverse rotation.
The target rotation direction can be chosen as the direction that moves away from the historical collision azimuth. For example, if the historical touch azimuth is the left front, the target rotation direction is the rotation toward the right front, i.e. clockwise; if it is the left rear, the target rotation direction is the rotation toward the right rear, i.e. counterclockwise; if it is the right front, the target rotation direction is the rotation toward the left front, i.e. counterclockwise; and if it is the right rear, the target rotation direction is the rotation toward the left rear, i.e. clockwise.
Once the target rotation direction has been determined, the reverse rotation is performed in that direction, and its magnitude is controlled by the first duration. The longer the first duration, the larger the reverse rotation and the more likely the obstacle is avoided successfully during the subsequent advance. In practice, however, the robot may need to reach a specific destination, and if the reverse rotation is too large, then after rotating for the first duration, retreating for the second duration and advancing the target distance the robot may deviate too far from that destination. Based on this trade-off, the value of the first duration can be preset according to the actual situation.
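As a rough illustration (not the patented implementation), the azimuth classification and the direction table above could be written as follows; the sign conventions, the single threshold standing in for the five preset angular velocities, and all names are assumptions.

```python
def classify_touch_azimuth(angular_vel: float, linear_vel: float,
                           thresh: float = 0.3) -> str:
    """Classify the historical touch azimuth from the pre-touch motion.

    Assumed sign convention: angular_vel > 0 means turning left
    (counterclockwise), linear_vel > 0 means advancing.
    """
    if abs(angular_vel) < thresh:
        return "front_middle" if linear_vel >= 0 else "rear_middle"
    if linear_vel >= 0:                       # advancing while turning
        return "front_left" if angular_vel > 0 else "front_right"
    else:                                     # retreating while turning
        return "rear_right" if angular_vel > 0 else "rear_left"


# Target rotation direction that moves the robot away from the touch azimuth.
TARGET_ROTATION = {
    "front_left":  "clockwise",
    "front_right": "counterclockwise",
    "rear_left":   "counterclockwise",
    "rear_right":  "clockwise",
}
```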
Optionally, determining the historical touch azimuth of the robot from the historical motion information of the robot when it touched the obstacle comprises:
if the historical motion information is that the robot was rotating to the left and advancing when it touched the obstacle, with an angular velocity greater than the first preset angular velocity, determining the historical touch azimuth to be the left front of the robot;
if the historical motion information is that the robot was rotating to the left and retreating when it touched the obstacle, with an angular velocity greater than the second preset angular velocity, determining the historical touch azimuth to be the right rear of the robot;
if the historical motion information is that the robot was rotating to the right and advancing when it touched the obstacle, with an angular velocity greater than the third preset angular velocity, determining the historical touch azimuth to be the right front of the robot;
if the historical motion information is that the robot was rotating to the right and retreating when it touched the obstacle, with an angular velocity greater than the fourth preset angular velocity, determining the historical touch azimuth to be the left rear of the robot.
Further, the direction of the reverse rotation may be determined from the historical touch azimuth, so that the robot rotates in reverse for the first duration in that direction.
The historical touch azimuth of the robot is thus determined from the historical motion information, and the robot rotates in reverse for the first duration according to the historical touch azimuth.
Optionally, controlling the robot to rotate in reverse for the first duration comprises: if the robot is moving along a navigation path, determining the target rotation direction according to the path direction of the navigation path, and controlling the robot to rotate for the first duration in that direction.
When rotating in reverse for the first duration, the robot is specifically controlled to rotate in the target rotation direction. Since the robot may be travelling along the navigation path to a specific area to perform a specific task, the target rotation direction should match the path direction: for example, if the path direction points toward the left front, the target rotation direction should be a rotation toward the left front, i.e. counterclockwise, which also moves the robot away from a touch azimuth at the right front.
In a specific scenario, while the robot moves along the navigation path the path direction may be close to the touch azimuth, for example both lying toward the left front. In that case, when determining the target rotation direction that matches the path direction, the path direction can be taken as a reference line and the rotation toward the other side of that reference line, away from the touch azimuth, is taken as the target rotation direction.
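One way this path-aware choice could look in code is sketched below; the angle conventions, the helper name and the idea of flipping about the path line are assumptions made for illustration only.

```python
import math
from typing import Optional


def rotation_direction_from_path(path_heading: float, robot_heading: float,
                                 touch_bearing: Optional[float] = None) -> str:
    """Choose the reverse-rotation direction while following a navigation path.

    Angles are in radians in the world frame (assumed convention:
    counterclockwise positive).
    """
    # Signed difference between path heading and current heading, in (-pi, pi].
    diff = math.atan2(math.sin(path_heading - robot_heading),
                      math.cos(path_heading - robot_heading))
    direction = "counterclockwise" if diff > 0 else "clockwise"

    if touch_bearing is not None:
        touch_diff = math.atan2(math.sin(touch_bearing - path_heading),
                                math.cos(touch_bearing - path_heading))
        # Touch azimuth on the same side as the chosen turn: flip the direction
        # so the robot turns toward the other side of the path reference line.
        if (touch_diff > 0) == (diff > 0):
            direction = "clockwise" if direction == "counterclockwise" else "counterclockwise"
    return direction
```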
In this embodiment, the robot rotates in reverse for the first duration and then retreats for the second duration, or retreats for the second duration and then rotates in reverse for the first duration, or rotates in reverse for the first duration while retreating for the second duration, so that after the reverse rotation and the retreat the robot advances with a motion tendency that avoids the obstacle. In addition, the historical touch azimuth is determined from the historical motion information of the robot before it touched the obstacle, and the robot is controlled to rotate in reverse for the first duration according to that azimuth, so the obstacle can be avoided more accurately. Finally, determining the target rotation direction from the path direction of the navigation path lets the robot avoid the obstacle while still following the navigation path, which suits scenarios where the robot moves in navigation mode.
Referring to fig. 4, a third embodiment of the present invention provides a control method of a robot. Based on the first embodiment shown in fig. 2, the step S10 comprises:
Step S12, when the laser radar touches an obstacle, determining the average angular velocity and/or the average linear velocity over a preset time period before the laser radar touched the obstacle.
The preset time period is a time period set in advance for calculating the average angular velocity and/or the average linear velocity. The average linear velocity is the mean of all linear velocity values acquired over the preset time period, and the average angular velocity is the mean of all angular velocity values acquired over it. Both are vectors and have directions.
Optionally, the preset time period is the interval ending at the moment the laser radar touches the obstacle and starting a set time before that moment.
Step S13, controlling the robot to move in reverse for the target duration according to the average angular velocity and/or the average linear velocity, and then to advance the target distance.
The target duration comprises a first duration of reverse rotation and a second duration of backing up.
The second duration is preset in the robot. With the first duration unchanged, the larger the second duration, the further the robot retreats, and the further it ends up from the position of the previous touch after advancing the target distance, so the more likely it is to avoid the obstacle.
Specifically, during the reverse motion, the robot rotates in reverse for the first duration and, before, after or at the same time, retreats for the second duration in the direction opposite to the average angular velocity;
or, during the reverse motion, the robot rotates in reverse for the first duration and, before, after or at the same time, retreats for the second duration in the direction opposite to the average linear velocity;
or, during the reverse motion, the robot rotates in reverse for the first duration and, before, after or at the same time, retreats for the second duration in the directions opposite to both the average linear velocity and the average angular velocity.
If the laser radar touches the obstacle again while the target behavior is being executed, the average angular velocity and/or the average linear velocity are re-acquired before the target behavior is repeated, and the robot is controlled to retreat according to the newly acquired values.
If reversing the average linear velocity and/or the average angular velocity does not produce any backward movement, for example when the robot was rotating in place, the robot is simply controlled to back up.
In this embodiment, the average angular velocity and/or the average linear velocity over a preset time period before the laser radar touched the obstacle are determined. Because the linear and angular velocities of the robot determine its path during rotation, different averages correspond to different reverse paths; controlling the reverse motion for the target duration according to the average angular velocity and/or average linear velocity keeps the reverse path consistent with the approach path the robot followed before the collision, so the robot does not collide with the obstacle while reversing.
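A small sketch of this averaging and reverse command is given below, with assumed sample buffering, sign conventions and robot primitives (drive, back_up, rotate_reverse); it is illustrative only.

```python
from collections import deque
from dataclasses import dataclass


@dataclass
class VelocitySample:
    t: float        # timestamp in seconds
    linear: float   # m/s, positive = forward (assumed convention)
    angular: float  # rad/s, positive = counterclockwise (assumed convention)


def average_velocity(history: deque, t_touch: float, window_s: float = 0.5):
    """Average linear/angular velocity over the preset window before the touch."""
    samples = [s for s in history if t_touch - window_s <= s.t <= t_touch]
    if not samples:
        return 0.0, 0.0
    avg_lin = sum(s.linear for s in samples) / len(samples)
    avg_ang = sum(s.angular for s in samples) / len(samples)
    return avg_lin, avg_ang


def reverse_motion_from_average(robot, avg_lin: float, avg_ang: float,
                                t1: float, t2: float):
    """Back up along the reverse of the pre-touch path, then counter-rotate.

    Negating both averages retraces the approach path backwards; a pure
    in-place rotation before the touch (avg_lin near zero) falls back to a
    plain back-up.
    """
    if abs(avg_lin) < 1e-3:
        robot.back_up(duration_s=t2)          # pure rotation before touch
    else:
        robot.drive(linear=-avg_lin, angular=-avg_ang, duration_s=t2)
    robot.rotate_reverse(duration_s=t1)
```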
Referring to fig. 5, a fourth embodiment of the present invention provides a control method of a robot. Based on the first embodiment shown in fig. 2, the method further comprises:
Step S30, recording the number of touches for which the laser radar is detected to touch the obstacle.
The number of touches is the number of times the same obstacle is touched while the target behavior is being repeated.
Because of the robot's particular shape and the fact that the laser radar is higher than the robot's upper surface, the robot may enter an endless loop of touching the obstacle while repeating the target behavior: the laser radar then touches the obstacle over and over, and the robot cannot successfully avoid it.
In one scenario, referring to fig. 14, the robot 100 includes a laser radar 101. When the robot 100 travels along a first travel route 103, the laser radar 101 touches an obstacle 102; the robot 100 then travels in the reverse direction of the first travel route 103, i.e. along a second travel route 104, after which the laser radar 101 touches the obstacle 102 again, and the robot 100 travels along the first travel route 103 once more. The robot 100 therefore cannot escape from this touch dead cycle.
Step S40, if the number of touches is greater than or equal to a preset number, ending the target behavior after the reverse motion has been executed.
The preset number is a preset touch count used to decide when to stop executing the target behavior.
To avoid the laser radar repeatedly touching the obstacle so that the robot can never avoid it, this embodiment ends the target behavior after the current reverse motion when the number of touches is greater than or equal to the preset number, i.e. the step of advancing the target distance is not executed.
Optionally, when the number of touches is greater than or equal to the preset number, the robot retreats during the reverse motion in the direction opposite to the average linear velocity over the preset time period before the touch but in the same direction as the average angular velocity over that period, which allows the obstacle to be avoided.
In one scenario, to let the robot break the touch dead cycle between the laser radar and the obstacle, referring to fig. 15, the robot 100 travels along the first travel route 103 and the laser radar 101 then touches the obstacle 102; when the number of touches is greater than the preset number, the robot 100 travels along a third travel route 105, i.e. it retreats in the direction opposite to the average linear velocity before the touch while keeping the same direction as the average angular velocity before the touch, which breaks the dead cycle between the laser radar 101 and the obstacle 102.
In this embodiment, the number of times the laser radar is detected to touch the obstacle is recorded, and if that number is greater than or equal to the preset number, the target behavior ends after the reverse motion has been executed, so the robot avoids entering a cycle of continuously touching the same obstacle and can still avoid it.
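An illustrative fragment of the counting and loop-breaking logic, reusing the execute_target_behavior sketch from the first embodiment; the preset limit, the handle_touch helper and all robot primitives are assumptions.

```python
PRESET_TOUCH_LIMIT = 3   # assumed preset number of touches


def handle_touch(robot, touch_count: int, avg_lin: float, avg_ang: float,
                 t2: float) -> int:
    """Handle one touch event and return the updated touch count.

    After PRESET_TOUCH_LIMIT touches the advance step is skipped and the
    back-up keeps the pre-touch angular velocity (same sign) while reversing
    the linear velocity, which breaks the back-and-forth dead cycle of
    fig. 14 / fig. 15.
    """
    touch_count += 1
    if touch_count >= PRESET_TOUCH_LIMIT:
        # Escape route (third travel route 105): reversed linear, same angular.
        robot.drive(linear=-avg_lin, angular=avg_ang, duration_s=t2)
        return touch_count              # end the target behavior, no advance
    execute_target_behavior(robot)      # normal case: reverse motion + advance
    return touch_count
```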
Referring to fig. 6, a fifth embodiment of the present invention provides a control method of a robot. Based on the first embodiment shown in fig. 2, the method further comprises:
Step S50, acquiring the touch azimuth at which the laser radar touches the obstacle, and acquiring target motion information of the robot.
To make it easier for the robot to avoid the obstacle during subsequent movement, this embodiment determines the obstacle profile from the touch azimuth and the target motion information at the moment the laser radar touches the obstacle, and draws an obstacle map from that profile, so that later the robot can move around the obstacle according to the obstacle map.
The target motion information is the motion information of the robot when the laser radar touches the obstacle; it includes the movement speed of the robot at that moment.
Step S60, determining the obstacle profile according to the touch azimuth and the target motion information.
Each time the laser radar touches the obstacle, the touch azimuth is recorded. The position of the obstacle is determined from the touch azimuths recorded over multiple touches together with the movement speed of the robot: the movement speed is used to compute how far the robot has moved, the touch azimuth indicates where the obstacle is relative to the robot, and from several touch azimuths and the movement speed the obstacle's position can be calculated.
In an embodiment, determining the obstacle profile according to the touch azimuth and the target motion information comprises:
determining a target coordinate position of the obstacle point in the robot coordinate system according to the touch azimuth and the movement speed;
where the obstacle point is the coordinate point that abstractly represents the obstacle, and the target coordinate position is the coordinate position of the obstacle point in the robot coordinate system;
converting the target coordinate position into the world coordinate system to obtain the actual coordinate position of the obstacle point;
where the actual coordinate position is the coordinate position of the obstacle point in the world coordinate system;
and determining the obstacle profile from the actual coordinate positions obtained when the obstacle is touched multiple times.
Each time the obstacle is touched, the actual coordinate position of that touch can be determined from the touch azimuth and the movement speed at the touch. To determine the obstacle profile, two or more touches can be used to obtain two or more actual coordinate positions; the more actual coordinate positions there are, the more accurate the obstacle profile.
Step S70, drawing an obstacle map according to the obstacle profile.
Once the obstacle profile has been obtained, it is drawn onto the obstacle map.
In this embodiment, the touch azimuth at which the laser radar touches the obstacle and the target motion information of the robot are acquired, the obstacle profile is determined from them, and the obstacle map is drawn from the profile, so that during later movement the robot can move around the obstacle according to the obstacle map, reducing or avoiding collisions with the obstacle while travelling along it.
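A sketch of the coordinate bookkeeping described above is given below, with an assumed contact radius, an assumed azimuth-to-bearing table, and the robot pose taken as given (for example integrated from the wheel speeds); none of these names come from the patent.

```python
import math
from typing import List, Tuple

LIDAR_CONTACT_RADIUS_M = 0.17   # assumed distance from robot centre to the contact point

# Assumed mapping from touch azimuth to a bearing in the robot frame
# (radians, 0 = straight ahead, counterclockwise positive).
AZIMUTH_BEARING = {
    "front_middle": 0.0,
    "front_left":  math.pi / 4,
    "front_right": -math.pi / 4,
    "rear_middle": math.pi,
    "rear_left":   3 * math.pi / 4,
    "rear_right":  -3 * math.pi / 4,
}


def obstacle_point_world(touch_azimuth: str,
                         robot_pose: Tuple[float, float, float]) -> Tuple[float, float]:
    """Obstacle point in the world frame for one touch event.

    robot_pose = (x, y, theta) in the world frame, typically integrated from
    the wheel speeds (the "movement speed" of the embodiment).
    """
    bearing = AZIMUTH_BEARING[touch_azimuth]
    # Target coordinate position in the robot coordinate system.
    xr = LIDAR_CONTACT_RADIUS_M * math.cos(bearing)
    yr = LIDAR_CONTACT_RADIUS_M * math.sin(bearing)
    # Convert to the world coordinate system using the robot pose.
    x, y, theta = robot_pose
    xw = x + xr * math.cos(theta) - yr * math.sin(theta)
    yw = y + xr * math.sin(theta) + yr * math.cos(theta)
    return xw, yw


def obstacle_contour(touches: List[Tuple[str, Tuple[float, float, float]]]) -> List[Tuple[float, float]]:
    """Accumulate the actual coordinate positions of repeated touches as the contour."""
    return [obstacle_point_world(az, pose) for az, pose in touches]
```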
Referring to fig. 7, a further embodiment of the present invention provides a control device of a robot comprising a behavior execution module 10 and a re-detection module 20, wherein:
the behavior execution module 10 is configured to control the robot to execute a target behavior when the laser radar is detected to touch an obstacle, the target behavior comprising the robot moving in reverse for a target duration and then advancing a target distance;
the re-detection module 20 is configured to repeat the target behavior if the laser radar is detected to touch the obstacle again while the target behavior is being executed; wherein the reverse motion is a motion opposite to the motion of the robot before it touched the obstacle.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the method of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or alternatively by hardware alone, although in many cases the former is the preferred implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) as described above, the software product comprising instructions for causing a robot to perform the method according to the embodiments of the present invention.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the invention; any equivalent structural or process transformation made using the content of this description, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of the invention.

Claims (10)

1. A control method of a robot, characterized in that the robot comprises a laser radar, the laser radar protrudes from the top of the robot, and a cover body is arranged on the outer side of the laser radar; the method comprises the following steps:
when the laser radar touches an obstacle, controlling the robot to execute a target behavior, wherein the target behavior comprises the robot moving reversely for a target duration and then advancing a target distance;
in the process of executing the target behavior, if the laser radar is detected to touch the obstacle again, repeating the target behavior;
wherein the reverse motion is a motion opposite to a motion of the robot before touching the obstacle;
the robot moving reversely for the target duration comprises:
determining an average angular velocity and/or an average linear velocity in a preset time period before the laser radar touches the obstacle;
and moving reversely for the target duration according to the average angular velocity and/or the average linear velocity.
2. The method of claim 1, wherein the target duration comprises a first duration and a second duration, and the robot moving reversely for the target duration comprises:
the robot reversely rotating for the first duration and then retreating for the second duration,
or the robot retreating for the second duration and then reversely rotating for the first duration,
or the robot reversely rotating for the first duration while retreating for the second duration.
3. The method of claim 2, wherein before the robot moves reversely for the target duration, the method further comprises:
determining a historical touch position of the robot according to the historical motion information of the robot before touching the obstacle;
determining a target rotation direction for reverse rotation according to the historical touch position;
the robot reversely rotating for the first duration comprises:
reversely rotating for the first duration according to the target rotation direction.
4. The method of claim 2, wherein the robot reversely rotating for the first duration comprises:
if the robot moves along a navigation path, determining a target rotation direction according to the path direction;
and reversely rotating for the first duration according to the target rotation direction.
5. The method of claim 1, wherein the method further comprises:
recording the number of times the laser radar is detected to touch the obstacle;
and if the number of touches is greater than or equal to a preset number, ending execution of the target behavior after the reverse movement is executed.
6. The method of claim 1, wherein the method further comprises:
acquiring a touch azimuth when the laser radar touches the obstacle, and acquiring target motion information of the robot;
determining an obstacle profile according to the touch azimuth and the target motion information;
and drawing an obstacle map according to the outline of the obstacle.
7. The method of claim 6, wherein the target motion information includes a movement speed of the robot, and determining the obstacle profile according to the touch azimuth and the target motion information comprises:
determining a target coordinate position of the obstacle point in a robot coordinate system according to the touch azimuth and the movement speed;
Converting the target coordinate position into a world coordinate system to obtain an actual coordinate position of the obstacle point;
and determining the outline of the obstacle according to a plurality of actual coordinate positions when the obstacle is touched for a plurality of times.
8. A control device of a robot, characterized in that the control device of a robot comprises a behavior execution module and a re-detection module, wherein:
the behavior execution module is used for controlling the robot to execute a target behavior when it is detected that the laser radar touches an obstacle, wherein the target behavior comprises the robot moving reversely for a target duration and then advancing a target distance;
the re-detection module is used for repeating the target behavior if it is detected that the laser radar touches the obstacle again during execution of the target behavior; wherein the reverse motion is a motion opposite to the motion of the robot before touching the obstacle;
the robot moving reversely for the target duration comprises:
determining an average angular velocity and/or an average linear velocity in a preset time period before the laser radar touches the obstacle;
and moving reversely for the target duration according to the average angular velocity and/or the average linear velocity.
9. A robot comprising a memory, a processor and a control program of the robot stored on the memory and executable on the processor, the control program of the robot, when executed by the processor, implementing the steps of the control method of the robot according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that a control program of a robot is stored on the computer-readable storage medium, which when executed by a processor, implements the steps of the control method of a robot according to any one of claims 1 to 7.
CN202111169113.0A 2021-09-30 2021-09-30 Robot control method and device, robot and readable storage medium Active CN114019951B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111169113.0A CN114019951B (en) 2021-09-30 2021-09-30 Robot control method and device, robot and readable storage medium

Publications (2)

Publication Number Publication Date
CN114019951A CN114019951A (en) 2022-02-08
CN114019951B true CN114019951B (en) 2023-08-08

Family

ID=80055384

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111169113.0A Active CN114019951B (en) 2021-09-30 2021-09-30 Robot control method and device, robot and readable storage medium

Country Status (1)

Country Link
CN (1) CN114019951B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114779784A (en) * 2022-04-28 2022-07-22 格力博(江苏)股份有限公司 Control method for robot tool and robot tool
CN115137267B (en) * 2022-07-13 2024-03-26 浙江欣奕华智能科技有限公司 Obstacle avoidance walking method and device of cleaning robot, electronic equipment and medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107544524A (en) * 2017-10-30 2018-01-05 北京奇虎科技有限公司 Collision processing method, device and the robot of robot
CN108181904A (en) * 2017-12-29 2018-06-19 深圳市艾特智能科技有限公司 Obstacle Avoidance, system, readable storage medium storing program for executing and robot
CN208342844U (en) * 2018-06-21 2019-01-08 深圳市杉川机器人有限公司 A kind of triggering avoiding mechanism and robot
CN111240310A (en) * 2018-11-13 2020-06-05 北京奇虎科技有限公司 Robot obstacle avoidance processing method and device and electronic equipment
CN111930106A (en) * 2019-04-28 2020-11-13 广东宝乐机器人股份有限公司 Mobile robot and control method thereof
CN112180945A (en) * 2020-10-22 2021-01-05 南京苏美达智能技术有限公司 Method for automatically generating barrier boundary and automatic walking equipment
CN213226284U (en) * 2020-07-17 2021-05-18 深圳市杉川机器人有限公司 Radar device and mobile robot

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109878515B (en) * 2019-03-12 2021-03-16 百度在线网络技术(北京)有限公司 Method, device, storage medium and terminal equipment for predicting vehicle track

Also Published As

Publication number Publication date
CN114019951A (en) 2022-02-08

Similar Documents

Publication Publication Date Title
CN114019951B (en) Robot control method and device, robot and readable storage medium
CN110543168B (en) Walking method of self-moving robot and walking method of sweeping robot
CN107041718B (en) Cleaning robot and control method thereof
EP3082543B1 (en) Autonomous mobile robot
CN112327878B (en) Obstacle classification and obstacle avoidance control method based on TOF camera
US20220061616A1 (en) Cleaning robot and control method thereof
CN109997089A (en) Floor treatment machine and floor treatment method
EP2261762A2 (en) Robot cleaner and control method thereof
CN110850885A (en) Autonomous robot
CN112806912B (en) Robot cleaning control method and device and robot
CN111358371B (en) Robot escaping method and robot
CN211559963U (en) Autonomous robot
CN106647761A (en) Self-moving sweeper and control method thereof
CN108762247A (en) Obstacle avoidance control method for self-moving equipment and self-moving equipment
CN112423639B (en) Autonomous walking type dust collector
CN110088701B (en) Operating method for a self-propelled cleaning device and such a cleaning device
CN112423640A (en) Autonomous walking type dust collector
CN113693505B (en) Obstacle avoidance method and device for sweeping robot and storage medium
CN112308033B (en) Obstacle collision warning method based on depth data and visual chip
CN114326711A (en) Narrow passage passing method, device, robot and computer readable storage medium
CN113974507A (en) Carpet detection method and device for cleaning robot, cleaning robot and medium
CN114652217B (en) Control method, cleaning robot, and storage medium
CN115500737A (en) Ground medium detection method and device and cleaning equipment
CN113465592A (en) Navigation method and self-walking device
CN114115241B (en) Obstacle detection method, obstacle-based navigation device and robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant