CN211559963U - Autonomous robot - Google Patents

Autonomous robot

Info

Publication number
CN211559963U
CN211559963U (application CN201922340482.6U)
Authority
CN
China
Prior art keywords: robot, obstacle, main body, sensor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201922340482.6U
Other languages
Chinese (zh)
Inventor
杨勇
吴泽晓
郑志帆
罗治佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen 3irobotix Co Ltd
Original Assignee
Shenzhen 3irobotix Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen 3irobotix Co Ltd filed Critical Shenzhen 3irobotix Co Ltd
Priority to CN201922340482.6U priority Critical patent/CN211559963U/en
Application granted granted Critical
Publication of CN211559963U publication Critical patent/CN211559963U/en
Legal status: Active


Abstract

The utility model discloses an autonomous robot comprising a robot main body, a depth sensor, and a control module. The depth sensor is arranged on the robot main body and is used to identify obstacles on the advancing route of the robot main body and to scan their outlines. The control module is arranged on the robot main body and is electrically connected with the depth sensor; it controls the robot main body to execute obstacle avoidance actions based on the detection signal of the depth sensor. The autonomous robot of the technical scheme of the utility model has the advantage of high obstacle-avoidance precision.

Description

Autonomous robot
Technical Field
The utility model relates to the field of robot technology, and in particular to an autonomous robot.
Background
Existing sweeping robots use an infrared sensor to avoid obstacles, but the infrared sensor is strongly affected by the surface color of the obstacle. For example, when the surface of an obstacle is black, little or no infrared light is reflected, so distance measurement fails and the obstacle avoidance of the sweeping robot is compromised. That is, the current sweeping robot has the problem of a low obstacle-avoidance success rate.
SUMMARY OF THE UTILITY MODEL
The utility model aims to provide an autonomous robot, in order to solve the problem of the low obstacle-avoidance success rate of the existing sweeping robot.
In order to achieve the above object, the utility model provides an autonomous robot, include:
a robot main body;
the depth sensor is arranged on the robot main body and used for identifying obstacles on the advancing route of the robot main body and scanning the outlines of the obstacles;
the control module is arranged on the robot main body and is electrically connected with the depth sensor;
the control module is used for controlling the robot main body to execute obstacle avoidance action based on the detection signal of the depth sensor.
Optionally, the depth sensor has a preset obstacle height detection interval, an upper limit height of the preset obstacle height detection interval is equal to the installation height of the depth sensor plus a first preset distance, and a lower limit height of the preset obstacle height detection interval is equal to the installation height of the depth sensor minus a second preset distance.
Optionally, the first preset distance and/or the second preset distance is 90 mm.
Optionally, the obstacle crossing height of the robot main body is higher than a lower limit height of the preset obstacle height detection interval, and the control module is further configured to control the robot main body to continue to perform a forward movement when the obstacle height is not higher than the obstacle crossing height of the robot main body.
Optionally, the autonomous robot further includes a camera module, and the camera module is configured to acquire image data on a forward route of the robot main body;
the control module is further electrically connected with the camera module and used for controlling the robot main body to execute obstacle avoidance action when the robot main body is close to a preset obstacle based on the detection signal of the depth sensor when the obstacle on the advancing route of the robot main body is determined to be the preset obstacle based on the detection signal of the camera module, and the preset obstacle is not higher than the obstacle crossing height.
Optionally, the depth sensor is further configured to scan a profile of a junction between the recessed environment and the traveling road surface when it is detected that a recessed environment occurs on the traveling road surface of the robot main body and a height difference between the recessed environment and the traveling road surface is greater than the obstacle crossing height.
Optionally, the autonomous robot further includes an edge sensor, and the edge sensor is disposed on a side of the robot main body;
the control module is further used for determining a rotation angle of the robot main body according to a detection signal of the depth sensor when an obstacle avoidance action is executed, and controlling the robot main body to rotate according to the rotation angle, so that the edge sensor faces the virtual obstacle, and the robot main body walks around the periphery of the obstacle through the edge traditioner. Optionally, the depth sensor is disposed at the front side of the robot body, or the depth sensor is disposed near the top of the robot body at the upper edge of the front side of the robot body. Optionally, a plurality of depth sensors are arranged on the robot main body at intervals.
Optionally, the depth sensor is a TOF sensor, a 3D structured light sensor, a binocular sensor, a lidar sensor, or a millimeter wave sensor.
Optionally, the autonomous robot is a sweeping robot.
In the technical scheme of the utility model, the contour of the obstacle is obtained by the depth sensor, so the control module can avoid the virtual obstacle constructed from that contour in its built-in motion map. The robot main body thereby accurately avoids the obstacle on its actual traveling route, greatly improving the obstacle-avoidance precision of the autonomous robot.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention; those skilled in the art can obtain other drawings from the structures shown in these drawings without creative effort.
Fig. 1 is a block diagram of an autonomous robot according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of the embodiment shown in fig. 1.
Reference numerals:
10: Robot main body
20: Depth sensor
30: Control module
40: Camera module
50: Edge sensor
The objects, features and advantages of the present invention will be further described with reference to the accompanying drawings.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings. It is obvious that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from the embodiments of the present invention without creative effort belong to the protection scope of the present invention.
It should be noted that, where directional indications (such as upper, lower, left, right, front, and rear) are involved in the embodiments of the present invention, they are only used to explain the relative positional relationship, motion situation, etc. of the components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indication changes accordingly.
In addition, descriptions involving "first", "second", etc. in the embodiments of the present invention are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated; a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. The meaning of "and/or" throughout includes three parallel schemes: "A and/or B" includes scheme A, scheme B, or a scheme satisfying both A and B. Furthermore, the technical solutions of the embodiments may be combined with each other, but only where the combination can be realized by those skilled in the art; when the combination is contradictory or cannot be realized, it should be considered not to exist and not to fall within the protection scope of the present invention.
The utility model provides an autonomous robot. In this embodiment, the autonomous robot is a sweeping robot that can navigate across a floor surface. In some examples, the autonomous robot may clean the surface while traversing it, removing debris by agitating the debris and/or collecting it by applying negative pressure (e.g., a partial vacuum) above the surface. To perform its locomotion function, the robot includes a robot main body 10 supported by a drive system that can drive and steer the robot across the floor surface. The robot main body 10 has a forward portion and a rearward portion. The robot main body 10 may be circular, but may also take other shapes, including but not limited to a square shape, a rectangular shape, a shape whose forward portion is square and rearward portion is circular, or a shape whose forward portion is circular and rearward portion is square. The drive system includes a right drive wheel module and a left drive wheel module, each comprising a drive wheel and a drive motor driving the corresponding wheel.
In the embodiment of the present invention, as shown in fig. 1 and 2, the autonomous robot includes a robot main body 10, a depth sensor 20, and a control module 30. Wherein, the depth sensor 20 is arranged on the robot main body 10, and is used for identifying an obstacle on the advancing route of the robot main body 10 and scanning the outline of the obstacle; the control module 30 is disposed on the robot body 10 and electrically connected to the depth sensor 20, and the control module 30 is configured to control the robot body 10 to perform an obstacle avoidance operation based on a detection signal of the depth sensor 20.
The control module 30 controls the robot main body 10 to execute the obstacle avoidance action based on the detection signal of the depth sensor 20, specifically: the control module 30 constructs a motion map of the robot main body 10 based on the detection signal of the depth sensor 20; meanwhile, the control module 30 acquires the contour, and constructs a virtual obstacle in the moving map by taking the contour as an object; when the robot main body 10 approaches the virtual obstacle in the movement map, the control module 30 controls the robot main body 10 to perform an obstacle avoidance operation.
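The control flow just described (build a motion map from depth detections, register each scanned contour as a virtual obstacle, trigger avoidance on approach) can be sketched as follows. This is an illustrative sketch only: all class and function names, the data structures, and the 0.10 m approach radius are our assumptions, not from the patent.

```python
class MotionMap:
    """Minimal stand-in for the motion map built from depth-sensor detections."""
    def __init__(self):
        self.virtual_obstacles = []  # list of (x, y, contour) entries

    def add_virtual_obstacle(self, x, y, contour):
        # Construct a virtual obstacle in the map, with the contour as the object.
        self.virtual_obstacles.append((x, y, contour))

    def obstacle_near(self, x, y, radius):
        # True if any stored virtual obstacle lies within `radius` of (x, y).
        return any(abs(ox - x) <= radius and abs(oy - y) <= radius
                   for ox, oy, _ in self.virtual_obstacles)

def control_step(motion_map, robot_pos, detection):
    """One control-module step: register a fresh detection, then decide motion.

    detection: None, or a dict {"x", "y", "contour"} from the depth sensor.
    """
    if detection is not None:
        motion_map.add_virtual_obstacle(detection["x"], detection["y"],
                                        detection["contour"])
    # Avoid when the robot approaches a virtual obstacle in the map
    # (0.10 m radius is an assumed stand-in for the preset approach distance).
    if motion_map.obstacle_near(robot_pos[0], robot_pos[1], radius=0.10):
        return "avoid"
    return "forward"
```

A real implementation would use an occupancy grid and the robot's pose estimate; the point here is only the ordering: map update first, then the proximity check against virtual obstacles.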
It is worth explaining that the existing sweeping robot uses an infrared sensor to avoid obstacles, but the infrared sensor is strongly affected by the surface color of the obstacle. For example, when the surface of an obstacle is black, the low reflectivity of black means that little light is reflected and the infrared sensor fails to measure distance, affecting the obstacle avoidance of the sweeping robot. This causes the low obstacle-avoidance success rate of the existing sweeping robot.
To address these defects of the existing sweeping robot, the technical scheme of the utility model sets the depth sensor 20 on the robot main body 10, and the depth sensor 20 can identify obstacles on the advancing route of the robot main body 10. Because the depth sensor 20 measures depth, it can also obtain the profile of the obstacle, including but not limited to the obstacle's width and height data and, where detectable, its length data. Meanwhile, the depth sensor 20 can detect the distance between the obstacle and the robot main body 10, and the control module 30 can construct a motion map of the robot main body 10 from the detection signal of the depth sensor 20. The control module 30 then acquires the contour of the obstacle from the depth sensor 20 and constructs a virtual obstacle in the motion map with that contour as the object. When the robot main body 10 approaches the virtual obstacle in the motion map, the control module 30 controls the robot main body 10 to perform an obstacle avoidance operation, so that the robot main body 10 accurately avoids the obstacle in the actual scene. Moreover, after the motion map is constructed, the robot main body 10 can store it, and the virtual obstacles constructed in it are retained. When the autonomous robot encounters the same obstacle in later movement, it can avoid it directly without acquiring its outline again, improving the obstacle-avoidance efficiency of the autonomous robot.
It can be seen that, compared with the existing sweeping robot, the autonomous robot of the present application obtains the contour of the obstacle through the depth sensor 20, so that the control module 30 can control the robot main body 10 to avoid the virtual obstacle constructed from that contour in the built-in motion map; the robot main body 10 thus accurately avoids the obstacle on the actual traveling route, and the obstacle-avoidance accuracy of the autonomous robot is greatly improved. If, during movement of the autonomous robot, the depth sensor 20 no longer finds an obstacle at a position where one was previously recorded, the control module 30 removes the corresponding virtual obstacle from the motion map; if the obstacle detected at that position has changed, the depth sensor 20 rescans the new obstacle and obtains its contour, and the control module 30 updates the virtual obstacle in the motion map accordingly.
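The map-maintenance behavior in this paragraph (remove a stale virtual obstacle, replace a changed one) might be sketched like this. The dict-based store keyed by position and all names are our illustrative assumptions, not the patent's implementation.

```python
def update_virtual_obstacle(map_entries, position, scan_contour):
    """Keep the stored motion map consistent with fresh depth-sensor scans.

    map_entries: dict mapping position -> stored contour (the retained
                 virtual obstacles in the motion map)
    scan_contour: contour seen at `position` in the new scan,
                  or None if nothing is detected there anymore
    """
    stored = map_entries.get(position)
    if scan_contour is None and stored is not None:
        # Obstacle has gone: remove the stale virtual obstacle.
        del map_entries[position]
    elif scan_contour is not None and scan_contour != stored:
        # New or changed obstacle: the rescanned contour replaces the stored one.
        map_entries[position] = scan_contour
    return map_entries
```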
Specifically, in the present embodiment, the depth sensor 20 may be a TOF sensor, a 3D structured light sensor, a binocular sensor, a lidar sensor, a millimeter wave sensor, or the like.
Further, in order to improve the detection accuracy, in the present embodiment, when the depth sensor 20 has a light emitting source (e.g., a TOF sensor, a 3D structured light sensor, a binocular sensor, a lidar sensor, etc.), the emitted light is laser light. Of course, the design of the present application is not limited thereto, and in other embodiments of the present application, the light emitted by the depth sensor 20 may also be infrared light.
Further, the depth sensor 20 has a preset obstacle height detection interval, and the control module 30 is configured to obtain the contour of an obstacle when the height of the obstacle is within this interval and to construct the virtual obstacle in the motion map according to the contour. In other words, the preset obstacle height detection interval defines the height range within which the depth sensor 20 registers obstacles. When the low point of an obstacle is higher than the upper limit of the interval, or the high point of an obstacle is lower than the lower limit of the interval, the control module 30 does not set a virtual obstacle at the corresponding position of the motion map, and the robot main body 10 continues its forward movement. This arrangement allows the autonomous robot to adapt its cleaning work to different environments.
For example, take a table as a case where the low point of an obstacle is higher than the upper limit of the preset obstacle height detection interval. The depth sensor 20 can find the table from a distance while the autonomous robot travels, i.e., an obstacle is found on the advancing route of the robot main body 10; but a table top is generally high, and the robot main body 10 can pass under it smoothly. The table-top portion of the table is therefore not an obstacle-avoidance object of the robot main body 10. By adjusting the upper limit of the preset obstacle height detection interval so that the table's low point is above that limit, the control module 30 does not construct a virtual obstacle in the motion map, and the robot main body 10 can pass smoothly under the table to clean the area beneath it. Of course, the table is only an example; in practical application scenarios, chairs, sofas, beds, and the like may be treated in the same way. Conversely, considering the differing heights of different kinds of debris, the lower limit of the interval can be adjusted so that debris lower than the lower limit does not generate a virtual obstacle, and the robot main body 10 can pass over and clean up such debris normally.
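The interval filter described above reduces to two comparisons. The function below is an illustrative sketch; heights are in millimetres, and the limit values used in any example are assumptions except for the relationships stated in the text.

```python
def within_detection_interval(obstacle_low, obstacle_high,
                              lower_limit, upper_limit):
    """Decide whether an obstacle should get a virtual obstacle in the map.

    No virtual obstacle is built when the obstacle's low point is above the
    interval's upper limit (e.g. a table top the robot passes under) or its
    high point is below the lower limit (e.g. small debris to be swept up).
    All heights are in millimetres above the travelling surface.
    """
    if obstacle_low > upper_limit:
        return False   # robot passes underneath
    if obstacle_high < lower_limit:
        return False   # robot drives over and cleans it
    return True
```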
The obstacle crossing height of the robot main body 10 refers to a height of a convex object or a concave environment through which the robot main body 10 can pass back and forth without obstacles. The robot main body 10 is controlled to avoid obstacles which cannot be crossed, so that the cleaning area of the autonomous robot can be increased while the autonomous robot is prevented from colliding with the obstacles.
Specifically, in this embodiment, the upper limit of the preset obstacle height detection interval is the installation height of the depth sensor 20 plus a first preset distance, and the lower limit of the interval is the installation height of the depth sensor 20 minus a second preset distance. It can be understood that the depth sensor 20 has a certain scanning angle in the vertical direction; therefore, to enlarge the scanning range of the depth sensor 20 as much as possible, its installation position is raised accordingly, so that the installation height of the depth sensor 20 is close to the height of the robot main body 10. Determining the preset obstacle height detection interval by adding the first preset distance to, and subtracting the second preset distance from, the installation height of the depth sensor 20 enlarges the interval as much as possible, and thereby enlarges the range of obstacles that can be detected. Of course, the design of the present application is not limited to this; in other embodiments, the preset obstacle height detection interval may range from a lower limit of 0 to an upper limit of 0 plus a preset value, where 0 refers to the height of the traveling road surface of the robot main body 10 and the preset value bounds the obstacle heights to be detected.
Specifically, in this embodiment, the first preset distance is 90 mm and the second preset distance is also 90 mm; that is, the preset obstacle height detection interval is the installation height of the depth sensor 20 plus or minus 90 mm. If the installation height of the depth sensor 20 is 90 mm (taking the ground as the reference surface), the maximum obstacle height detected by the depth sensor 20 is 180 mm; that is, when the low point of an obstacle is 180 mm or more above the ground, the robot main body 10 continues its forward movement. The height of a sweeping robot is generally lower than 18 cm, so this setting suits most sweeping robots. Of course, the design of the present application is not so limited; in other implementations the depth sensor 20 may be adapted to the actual usage conditions, and the first and second preset distances adjusted accordingly. For example, the first preset distance may also be 30 mm, 40 mm, 50 mm, 60 mm, 70 mm, 80 mm, 100 mm, 110 mm, or 120 mm, and likewise for the second preset distance.
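The interval of this embodiment can be expressed as a small helper. The ±90 mm defaults come from the text; clamping the lower limit at the travelling surface (0 mm) is our assumption, added so a low mounting height cannot yield a negative lower limit.

```python
def detection_interval(mount_height_mm,
                       first_preset_mm=90, second_preset_mm=90):
    """Preset obstacle height detection interval, as in this embodiment:
    upper limit = mounting height + first preset distance,
    lower limit = mounting height - second preset distance.
    The clamp at 0 (the travelling surface) is our assumption.
    Returns (lower_limit_mm, upper_limit_mm).
    """
    upper = mount_height_mm + first_preset_mm
    lower = max(0, mount_height_mm - second_preset_mm)
    return lower, upper
```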
Further, in this embodiment, the obstacle crossing height of the robot main body 10 is higher than the lower limit of the preset obstacle height detection interval, and the control module 30 is further configured to control the robot main body 10 to continue its forward movement when the obstacle height is not higher than the obstacle crossing height of the robot main body 10. Setting the lower limit of the interval below the obstacle crossing height means that when an obstacle is lower than the obstacle crossing height, the robot main body 10 continues forward and crosses the obstacle, cleaning it as it passes, which ensures the cleaning effect of the autonomous robot. Moreover, since such an obstacle is lower than the obstacle crossing height, the movement of the robot main body 10 is not affected even if the obstacle is not debris to be cleaned.
In one embodiment, the autonomous robot further comprises a camera module 40, wherein the camera module 40 is used for acquiring image data on the advancing route of the robot main body 10; the control module 30 is further electrically connected to the camera module 40, and the control module 30 is configured to control the robot main body 10 to perform an obstacle avoidance operation when the robot main body 10 is close to a preset obstacle, based on the detection signal of the depth sensor 20, when it is determined that the obstacle on the robot main body advancing path is the preset obstacle based on the detection signal of the camera module 40, where the preset obstacle is not higher than the obstacle crossing height.
The control module 30 controls the robot main body 10 to execute the obstacle avoidance action as follows: the control module 30 obtains the image data and compares it with images of preset obstacles. If the depth sensor 20 detects that the height of the obstacle is lower than the obstacle crossing height and a preset obstacle appears in the image data, the control module 30 obtains the outline of the preset obstacle from local storage or the cloud, and constructs a virtual obstacle matching the preset obstacle in the region of the motion map corresponding to the preset obstacle, according to the detection signal of the depth sensor 20.
It will be appreciated that during cleaning, some obstacles (e.g., socks, wires) may be lower than the obstacle crossing height of the robot main body 10, yet the robot main body 10 should not cross over them, so as not to damage such objects. Therefore, the camera module 40 is added to obtain image data on the forward route of the robot main body 10, and the control module 30 compares that image data with images of preset obstacles, which may be stored locally or in the cloud; in either case the stored images can be updated, added, and deleted. The comparison between the image collected by the camera module 40 and a preset obstacle can follow the AI image-recognition approach of mobile terminals, which is widely applied at present and is not repeated here. When the control module 30 identifies a preset obstacle in the image acquired by the camera module 40, the depth detection function of the depth sensor 20 can locate the position of the preset obstacle in the motion map; the contour of the preset obstacle obtained locally or from the cloud is then constructed at the corresponding position of the motion map, so that the robot main body 10 can be controlled to execute the obstacle avoidance action, i.e., to avoid the obstacle. Illustratively, the preset obstacle may be a wire, a sock, a carpet, a socket, or the like.
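The camera-assisted decision for low obstacles can be sketched as below. The label set, the `contour_store` lookup standing in for the local/cloud contour storage, and all names are illustrative assumptions; the image-recognition step itself is abstracted away as a pre-computed label.

```python
# Illustrative labels for preset obstacles; the patent names wires, socks,
# carpets, and sockets as examples.
PRESET_OBSTACLES = {"sock", "wire", "carpet", "socket"}

def decide_action(label, obstacle_height, crossing_height, contour_store):
    """label: class recognised in the camera image (recognition assumed done);
    contour_store: dict standing in for the local/cloud contour lookup.
    Returns (action, contour_to_place_in_map_or_None)."""
    if obstacle_height > crossing_height:
        return "avoid", None                 # too tall to cross anyway
    if label in PRESET_OBSTACLES:
        # Low but must not be driven over: fetch the stored contour so a
        # matching virtual obstacle can be built at the depth-located position.
        return "avoid", contour_store.get(label)
    return "cross", None                     # ordinary low debris: drive over
```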
Specifically, in the present embodiment, the camera module 40 is a color camera. In other embodiments, the camera module 40 can also be a black and white camera or an infrared camera.
Further, the depth sensor 20 is further configured to scan the profile of the boundary between a recessed environment and the traveling road surface when it detects a recessed environment on the traveling road surface of the robot main body 10 whose height difference from the road surface is greater than the obstacle crossing height. It can be understood that, for a sweeping robot, when the drop from the normal traveling road surface to the recessed environment exceeds the obstacle crossing height of the robot main body 10, two risks arise: when the drop is small, the robot main body 10 may descend into the recessed environment but be unable to climb back out, affecting the execution of its sweeping task; when the drop is large, the robot main body 10 may be damaged by falling from the traveling road surface. Therefore, after the depth sensor 20 scans the profile of the boundary between the recessed environment and the traveling road surface, the control module 30 obtains the profile and constructs a corresponding virtual obstacle in the motion map of the robot main body 10, so as to control the robot main body 10 to avoid the recessed environment. Note that the obstacle crossing height of the robot main body 10 differs depending on the actual robot and is not specifically limited here.
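The trigger condition for treating a recessed area as an obstacle is a single drop comparison; a minimal sketch, with heights in millimetres and the sign convention (recess heights negative relative to the road surface) being our assumption:

```python
def is_cliff(road_height_mm, recess_height_mm, crossing_height_mm):
    """True when a recessed area must be avoided: its boundary contour is
    then scanned and a virtual obstacle built in the motion map.
    The recess is avoided when the drop exceeds the obstacle crossing height.
    """
    drop = road_height_mm - recess_height_mm
    return drop > crossing_height_mm
```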
Further, the autonomous robot further includes an edge sensor 50, disposed at a side of the robot main body 10; the control module 30 is further configured to determine a rotation angle of the robot main body 10 according to the detection signal of the depth sensor 20 when executing an obstacle avoidance action, and to control the robot main body 10 to rotate by the rotation angle, so that the edge sensor 50 faces the virtual obstacle and the robot main body 10 walks around the periphery of the obstacle using the edge sensor 50.
It should be noted that when a conventional sweeping robot performs edgewise sweeping after encountering an obstacle, it usually relies on a collision sensor mounted on its edge: after detecting the obstacle, the robot rotates by a fixed angle and moves forward again, rotating by the fixed angle after each collision with the obstacle until the edge sensor 50 is aligned with the obstacle, and only then sweeps along the edge. In this process the sweeping robot collides with the obstacle many times, making the process long and tedious and easily damaging the robot. In the autonomous robot of the utility model, the contour of the obstacle is acquired in advance by the depth sensor 20, and a virtual obstacle is constructed in the motion map at the position of the obstacle. The relative position of the robot main body 10 and the obstacle can therefore be obtained in real time, and since the installation position of the edge sensor 50 on the robot main body 10 is fixed, the control module 30 can calculate the rotation angle required to turn the edge sensor 50 toward the obstacle; because the relative position is available in real time, this rotation angle can also be calculated in real time. After encountering an obstacle, the robot main body 10 thus needs to rotate only once to turn the edge sensor 50 toward the obstacle and begin edgewise cleaning, and it does not collide with the obstacle while turning. Compared with a common sweeping robot, the autonomous robot therefore has the advantages of high working efficiency and long service life.
Specifically, in this embodiment, the control module 30 is further configured to: when executing the obstacle avoidance action, control the robot main body 10 to decelerate to a stop at a preset distance from the obstacle; after the robot main body 10 stops, calculate the included angle between the obstacle and the edge sensor 50, take this included angle as the rotation angle, and control the robot main body 10 to rotate by the rotation angle. That is, in this embodiment, the obstacle avoidance action of the robot main body 10 is specifically to decelerate to a stop at a preset distance from the obstacle, then rotate by the rotation angle and continue forward. Stopping before turning makes it easier both to calculate the rotation angle and for the robot main body 10 to execute the turn. Of course, the design of the present application is not limited to this; in other embodiments, the control module 30 may control the robot main body 10 to complete the turn while decelerating, or to turn directly without decelerating.
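One plausible geometry for the single-rotation alignment described above is sketched below. The patent only states that the edge sensor's mounting position is fixed; the 90° right-side mounting offset, the coordinate conventions, and all names are our assumptions.

```python
import math

def rotation_angle(robot_pos, robot_heading_deg, obstacle_pos,
                   edge_sensor_offset_deg=90):
    """Signed rotation (degrees, normalised to (-180, 180]) that turns the
    side-mounted edge sensor toward the obstacle in one motion.

    edge_sensor_offset_deg: fixed mounting angle of the edge sensor measured
    clockwise from the robot's forward axis (90 = right side; assumed value).
    """
    dx = obstacle_pos[0] - robot_pos[0]
    dy = obstacle_pos[1] - robot_pos[1]
    bearing = math.degrees(math.atan2(dy, dx))      # world angle to obstacle
    # The desired heading points the edge sensor (not the nose) at the obstacle:
    # heading - offset must equal the bearing, so heading = bearing + offset.
    desired_heading = bearing + edge_sensor_offset_deg
    turn = (desired_heading - robot_heading_deg + 180) % 360 - 180
    return turn
```

With the obstacle dead ahead and the sensor on the right side, the robot turns +90° once, placing its right flank against the obstacle for edgewise cleaning, which matches the single-rotation behavior claimed in the text.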
Optionally, in this embodiment, the preset distance is at least 10 cm. It can be understood that if the preset distance is set too short, for example less than 10 cm, the braking distance of the robot main body 10 is insufficient: the robot main body 10 may end up too close to the obstacle after stopping, which hinders its turning, or it may run straight into the obstacle and fail to complete the obstacle avoidance action.
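The braking-distance reasoning can be made concrete with the standard uniform-deceleration formula d = v²/(2a). The speed and deceleration figures below are illustrative assumptions, not values from the patent.

```python
# Hypothetical check that a preset distance covers the braking distance.
def braking_distance(speed_mps, decel_mps2):
    """Distance covered while decelerating uniformly to a stop: v^2 / (2a)."""
    return speed_mps ** 2 / (2.0 * decel_mps2)

def preset_distance_ok(preset_m, speed_mps, decel_mps2):
    """True if the robot can stop within the preset distance."""
    return preset_m >= braking_distance(speed_mps, decel_mps2)
```

For example, at an assumed 0.3 m/s forward speed and 0.5 m/s² deceleration, the braking distance is 0.09 m, so a 10 cm preset distance suffices while a 5 cm one would not, consistent with the "at least 10 cm" choice above.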
Specifically, in the present embodiment, the depth sensor 20 is disposed at the upper edge of the front side of the robot main body 10, near its top. It can be understood that this arrangement enlarges the viewing angle range of the depth sensor 20. Of course, the design of the present application is not limited thereto; in other embodiments, the depth sensor 20 may be provided on the front side, the rear side, or a lateral side of the robot main body 10.
It should be noted that, to ensure the camera module 40 and the depth sensor 20 share the same viewing angle, in the present embodiment the camera module 40 and the depth sensor 20 are installed at an interval on the same side of the robot main body 10.
Further, in the present embodiment, a plurality of depth sensors 20 are provided at intervals on the robot main body 10. This increases the viewing angle range of the autonomous robot and thereby improves its obstacle avoidance capability.
The above are only optional embodiments of the present utility model and do not thereby limit its scope; any equivalent structural changes made using the contents of the specification and drawings under the inventive concept of the present utility model, and any direct or indirect applications in other related technical fields, fall within the patent protection scope of the present utility model.

Claims (10)

1. An autonomous robot, comprising:
a robot main body;
the depth sensor is arranged on the robot main body and used for identifying obstacles on the advancing route of the robot main body and scanning the outlines of the obstacles;
the control module is arranged on the robot main body and is electrically connected with the depth sensor;
the control module is used for controlling the robot main body to execute obstacle avoidance action based on the detection signal of the depth sensor.
2. The autonomous robot of claim 1, wherein the depth sensor has a preset obstacle height detection section, an upper limit height of which is the installation height of the depth sensor plus a first preset distance, and a lower limit height of which is the installation height of the depth sensor minus a second preset distance.
3. The autonomous robot of claim 2, wherein the first predetermined distance and/or the second predetermined distance is 90 millimeters.
4. The autonomous robot of claim 2, wherein the obstacle crossing height of the robot main body is higher than a lower limit height of the preset obstacle height detection section, and the control module is further configured to control the robot main body to continue to perform the forward movement when the obstacle height is not higher than the obstacle crossing height of the robot main body.
5. The autonomous robot of claim 4, further comprising a camera module for capturing image data on a path of travel of the robot body;
the control module is further electrically connected with the camera module and used for controlling the robot main body to execute obstacle avoidance actions based on detection signals of the camera module and detection signals of the depth sensor.
6. The autonomous robot of any one of claims 1 to 5, further comprising an edgewise sensor provided at a side of the robot main body;
the control module is further used for controlling the robot main body to rotate according to a detection signal of the depth sensor when the obstacle avoidance action is executed, so that the edgewise sensor faces the obstacle, and the robot main body walks around the periphery of the obstacle by means of the edgewise sensor.
7. The autonomous robot of claim 1, wherein the depth sensor is provided at a front side of the robot main body, or wherein the depth sensor is provided near a top of the robot main body at an upper edge of the front side of the robot main body.
8. The autonomous robot of claim 1, wherein a plurality of the depth sensors are spaced apart on the robot main body.
9. The autonomous robot of claim 1, wherein the depth sensor is a TOF sensor, a 3D structured light sensor, a binocular sensor, a lidar sensor, or a millimeter wave sensor.
10. The autonomous robot of claim 1, wherein the autonomous robot is a sweeping robot.
CN201922340482.6U 2019-12-20 2019-12-20 Autonomous robot Active CN211559963U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201922340482.6U CN211559963U (en) 2019-12-20 2019-12-20 Autonomous robot


Publications (1)

Publication Number Publication Date
CN211559963U true CN211559963U (en) 2020-09-25

Family ID: 72550297



Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113693493A (en) * 2021-02-10 2021-11-26 北京石头世纪科技股份有限公司 Regional map drawing method and device, medium and electronic equipment
CN114521849A (en) * 2020-11-20 2022-05-24 余姚舜宇智能光学技术有限公司 TOF optical system for sweeping robot and sweeping robot
CN114680736A (en) * 2020-12-29 2022-07-01 深圳乐动机器人有限公司 Control method of cleaning robot and cleaning robot
WO2023071967A1 (en) * 2021-10-29 2023-05-04 追觅创新科技(苏州)有限公司 Self-moving device, method for determining obstacle edge of self-moving device, and medium


Similar Documents

Publication Publication Date Title
CN110850885A (en) Autonomous robot
CN211559963U (en) Autonomous robot
US11871891B2 (en) Cleaning robot and controlling method thereof
JP2022546289A (en) CLEANING ROBOT AND AUTOMATIC CONTROL METHOD FOR CLEANING ROBOT
CN112327878B (en) Obstacle classification and obstacle avoidance control method based on TOF camera
EP3104194B1 (en) Robot positioning system
CN110852312B (en) Cliff detection method, mobile robot control method, and mobile robot
CN110908378B (en) Robot edge method and robot
CN112415998A (en) Obstacle classification and obstacle avoidance control system based on TOF camera
CN110554696B (en) Robot system, robot and robot navigation method based on laser radar
CN112004645A (en) Intelligent cleaning robot
CN112806912B (en) Robot cleaning control method and device and robot
CN113841098A (en) Detecting objects using line arrays
CN114052561A (en) Self-moving robot
CN114594482A (en) Obstacle material detection method based on ultrasonic data and robot control method
CN112423639A (en) Autonomous walking type dust collector
CN113848944A (en) Map construction method and device, robot and storage medium
CN113741441A (en) Operation method and self-moving equipment
US20230225580A1 (en) Robot cleaner and robot cleaner control method
CN116339299A (en) Obstacle avoidance equipment, obstacle avoidance method, obstacle avoidance device, electronic equipment and medium
KR20180080877A (en) Robot cleaner
CN112882472A (en) Autonomous mobile device
CN115122323A (en) Autonomous mobile device
CN112230643A (en) Mobile robot for detecting front obstacle and method thereof
EP4349234A1 (en) Self-moving device

Legal Events

Date Code Title Description
GR01 Patent grant