CN117665849A - Terrain detection method for robot, control method for robot, and storage medium


Info

Publication number: CN117665849A
Application number: CN202311724822.XA
Authority: CN
Other languages: Chinese (zh)
Prior art keywords: point, robot, preset, position point, distance
Inventors: 钟皇平, 龚鼎, 杜川
Applicant: Yunjing Intelligent Innovation Shenzhen Co Ltd; Yunjing Intelligent Shenzhen Co Ltd
Legal status: Pending


Landscapes

  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The embodiments of the present application provide a terrain detection method and a control method for a robot, the robot itself, and a storage medium. The method includes the following steps: acquiring a point cloud collected by a laser sensor while the robot is on a working surface, where the plane of the working surface is a first preset plane; when the point cloud acquired by the robot at the current position includes a first position point that is located below the first preset plane at a distance greater than or equal to a preset threshold, and also includes a second position point whose distance from the first preset plane is smaller than the preset threshold, determining at least one target position point between the first position point and the second position point, where the distance from the target position point to the first preset plane is greater than or equal to the preset threshold; and determining the position corresponding to the target position point as a cliff or a depression. By filling target position points between the first position point and the second position point corresponding to the working surface, the boundary between a height drop area, such as a cliff or a depression, and the working surface is determined more accurately.

Description

Terrain detection method for robot, control method for robot, and storage medium
Technical Field
The present disclosure relates to the field of robotics, and in particular, to a terrain detection method for a robot, a control method for a robot, a robot, and a storage medium.
Background
In the related art, when a robot detects cliff-like scenes such as low-lying areas, steps, and stairs, the sensor used is typically an infrared sensor arranged at the bottom of the robot; when the infrared sensor detects a cliff, the robot performs certain specific actions to avoid these areas. However, the boundary of a height drop area cannot be confirmed effectively with an infrared sensor: the contour of the area can only be outlined by repeated detection along its boundary, which is inefficient.
Disclosure of Invention
The present application provides a terrain detection method for a robot, a control method for a robot, a robot, and a storage medium, with which the boundary of a height drop area, such as a cliff or a depression, can be determined based on a laser sensor.
In a first aspect, an embodiment of the present application provides a terrain detection method for a robot, where the robot carries a laser sensor, and a detection direction of the laser sensor includes a direction of a height of the robot, and the method includes:
acquiring point clouds acquired by the laser sensor when the robot is on a working surface, wherein the plane of the working surface is a first preset plane;
when the point cloud acquired by the robot at the current position comprises a first position point which is positioned below the first preset plane and has a distance from the first preset plane larger than or equal to a preset threshold value and comprises a second position point which has a distance from the first preset plane smaller than the preset threshold value, at least one target position point is determined between the first position point and the second position point, and the distance from the target position point to the first preset plane is larger than or equal to the preset threshold value;
and determining the position corresponding to the target position point as a cliff or a depression.
In a second aspect, an embodiment of the present application provides a control method of a robot, where the robot carries a laser sensor, and a detection direction of the laser sensor includes a direction of a height of the robot, and the method includes:
acquiring point clouds acquired by the laser sensor when the robot is on a working surface, wherein the plane of the working surface is a first preset plane;
when the point cloud acquired by the robot at the current position includes a position point that is located below the first preset plane and whose distance from the first preset plane is greater than or equal to a preset threshold, controlling the robot to at least stop moving toward that position point.
In a third aspect, embodiments of the present application provide a robot carrying a laser sensor, the robot further comprising a processor and a memory, the memory being configured to store a computer program; the processor is configured to execute the computer program and when executing the computer program implement:
the steps of the terrain detection method of the robot described above; and/or
the steps of the control method of the robot described above.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium storing a computer program, which when executed by a processor causes the processor to implement the steps of the method described above.
The embodiments of the present application provide a terrain detection method and a control method for a robot, the robot itself, and a storage medium. The robot carries a laser sensor whose detection direction includes the direction of the robot's height, and the method includes the following steps: acquiring a point cloud collected by the laser sensor while the robot is on a working surface, where the plane of the working surface is a first preset plane; when the point cloud acquired by the robot at the current position includes a first position point that is located below the first preset plane at a distance greater than or equal to a preset threshold, and also includes a second position point whose distance from the first preset plane is smaller than the preset threshold, determining at least one target position point between the first position point and the second position point, where the distance from the target position point to the first preset plane is greater than or equal to the preset threshold; and determining the position corresponding to the target position point as a cliff or a depression. By determining the first position point of the height drop area from the point cloud of the laser sensor and filling at least one target position point between the first position point and the second position point corresponding to the working surface, the terrain of the area between the two points is described more accurately, so the boundary between a height drop area such as a cliff or depression and the working surface can be determined more accurately. Moreover, compared with an infrared sensor at the bottom of the robot, which can outline the contour of the height drop area only by repeated detection along its boundary, the embodiments of the present application are more efficient.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure of embodiments of the present application.
Drawings
To describe the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below illustrate only some embodiments of the present application; a person skilled in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a schematic flow chart of a terrain detection method of a robot according to an embodiment of the present application;
FIG. 2 is a schematic block diagram of a cleaning robot in some embodiments of the present application;
FIG. 3 is a schematic view of a first preset plane and a point cloud in some embodiments of the present application;
FIG. 4 is a schematic illustration of a first location point and a second location point in some embodiments of the present application;
FIG. 5 is a schematic diagram of a robot coordinate system in some embodiments of the present application;
FIG. 6 is a schematic diagram of a first location point, a second location point, a third predetermined plane, and a fourth predetermined plane according to some embodiments of the present application;
FIG. 7 is a schematic illustration of a third predetermined plane, a fourth predetermined plane in some embodiments of the present application;
FIG. 8 is a schematic illustration of a target location point in some embodiments of the present application;
fig. 9 is a schematic diagram of a control method of a robot according to an embodiment of the present application;
fig. 10 is a schematic block diagram of a robot provided in an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
The flow diagrams depicted in the figures are merely illustrative and not necessarily all of the elements and operations/steps are included or performed in the order described. For example, some operations/steps may be further divided, combined, or partially combined, so that the order of actual execution may be changed according to actual situations.
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. The following embodiments and features of the embodiments may be combined with each other without conflict.
Referring to fig. 1, fig. 1 is a flow chart of a terrain detection method for a robot according to an embodiment of the present application.
For example, the robot may be a cleaning robot, a service robot, or the like, but is not limited thereto; it may also be, for example, a pet robot. A cleaning robot refers to a device designed for cleaning, including but not limited to: a vacuum cleaner, a floor scrubber, a wet-dry vacuum, a sweeping robot, a mopping robot, a combined sweeping-and-mopping robot, and the like.
For convenience of explanation, the embodiments of the present application will mainly be described with reference to a robot as a cleaning robot.
Fig. 2 is a schematic block diagram of a cleaning robot in an embodiment. The cleaning robot includes a robot body, a driving motor 102, a sensor unit 103, a controller 104, a cleaning member 105, a traveling unit 106, a memory 107, a communication unit 108, an interaction unit 109, an energy storage unit 110, and the like.
The sensor unit 103 provided on the robot body may include at least one of the following sensors: a lidar, a collision sensor, a distance sensor, a fall sensor, a counter, a gyroscope, and the like. For example, the lidar is arranged on the top or the periphery of the robot body; in operation, it obtains surrounding environment information, such as the distance and angle of an obstacle relative to the lidar. The collision sensor includes, for example, a collision housing and a trigger sensor. When the cleaning robot collides with an obstacle through the collision housing, the collision housing moves toward the inside of the cleaning robot and compresses an elastic buffer. After the collision housing has moved a certain distance into the cleaning robot, it contacts the trigger sensor, which is triggered to generate a signal that can be sent to the controller 104 in the robot body for processing. After bumping into the obstacle, the cleaning robot moves away from it, and the collision housing returns to its original position under the action of the elastic buffer. The collision sensor thus detects obstacles and also acts as a buffer during collisions. The distance sensor may specifically be an infrared detection sensor and may be used to detect the distance from an obstacle to the distance sensor. The distance sensor may be provided on a side of the robot body so that the distance from an obstacle near that side of the cleaning robot can be measured. The distance sensor may also be an ultrasonic distance sensor, a laser distance sensor, a depth sensor, or the like. The fall sensor is arranged at the bottom edge of the robot body; when the cleaning robot moves to the edge of the floor, the fall sensor can detect the risk of the cleaning robot falling from a height and trigger a corresponding anti-fall reaction, for example stopping the movement, or moving away from the fall position. A counter and a gyroscope are also provided inside the robot body: the counter is used to measure the distance traveled by the cleaning robot, and the gyroscope is used to detect the rotation angle of the cleaning robot so that its orientation can be determined.
The controller 104 is provided inside the robot main body, and the controller 104 is used to control the cleaning robot to perform a specific operation. The controller 104 may be, for example, a central processing unit (Central Processing Unit, CPU), a Microprocessor (Microprocessor), or the like. As shown in fig. 2, the controller 104 is electrically connected to the energy storage unit 110, the memory 107, the driving motor 102, the traveling unit 106, the sensor unit 103, the interaction unit 109, the cleaning member 105, and the like to control these components.
The cleaning members 105 may be used to clean the floor, and the number of cleaning members 105 may be one or more. The cleaning member 105 comprises, for example, a mop, which may be at least one of the following: a rotary mop, a flat mop, a roller mop, a crawler mop, and the like, although it is not limited thereto. The mop is arranged at the bottom of the robot body, specifically toward the rear of the bottom. Taking a rotary mop as an example, a driving motor 102 is arranged in the robot body, two rotating shafts extend out of the bottom of the robot body, and the mops are sleeved on the rotating shafts. The driving motor 102 can rotate the shafts, which in turn rotate the mops.
The traveling unit 106 is a component related to the movement of the cleaning robot, and the traveling unit 106 includes, for example, a driving wheel and a universal wheel. The universal wheels and the driving wheels are matched to realize the steering and the movement of the cleaning robot.
A memory 107 is provided on the robot body, and a program is stored on the memory 107, which when executed by the controller 104, realizes a corresponding operation. The memory 107 is also used to store parameters for use by the cleaning robot. The Memory 107 includes, but is not limited to, a magnetic disk Memory, a compact disk read Only Memory (CD-ROM), an optical Memory, and the like.
A communication unit 108 provided on the robot main body, the communication unit 108 for allowing the cleaning robot to communicate with external devices; for example with a terminal or with a base station. Wherein the base station is a cleaning device for use with a cleaning robot.
The interaction unit 109 is provided on the robot main body, and a user can interact with the cleaning robot through the interaction unit 109. The interaction unit 109 includes, for example, at least one of a touch screen, a switch button, a speaker, and the like. For example, the user can control the cleaning robot to start or stop by pressing a switch button.
The energy storage unit 110 is disposed inside the robot body, and the energy storage unit 110 is used to provide power for the cleaning robot.
The robot body is further provided with a charging part for acquiring power from an external device to charge the energy storage unit 110 of the cleaning robot.
It should be understood that the cleaning robot depicted in fig. 2 is only one specific example in the embodiments of the present application, and is not meant to limit the robot configuration in the embodiments of the present application in any way. The robot of the embodiment of the application can also be in other specific implementation manners. In other implementations, the cleaning robot may have more or fewer components than the cleaning robot shown in fig. 2; for example, the cleaning robot may include a clean water chamber for storing clean water and/or a recovery chamber for storing dirt, the cleaning robot may transfer the clean water stored in the clean water chamber to the mop and/or the floor to wet the mop, and clean the floor based on the wet mop, and the cleaning robot may further collect dirt of the floor or sewage containing dirt into the recovery chamber; the cleaning robot can also convey clean water stored in the clean water chamber to the cleaning piece so as to clean the cleaning piece, and dirty sewage containing dirt after cleaning the cleaning piece can also be conveyed to the recovery chamber.
The terrain detection method of the robot provided by the embodiment of the application is described in detail below.
As shown in fig. 1, the terrain detection method of the robot according to an embodiment of the present application includes steps S110 to S130.
Step S110, acquiring point clouds acquired by the robot through the laser sensor when the robot is on a working surface, wherein the plane of the working surface is a first preset plane.
Specifically, as shown in fig. 3, the robot carries a laser sensor 310, and the detection direction of the laser sensor 310 includes the direction of the robot height.
In some embodiments, the laser sensor 310 is a line structure light sensor. As shown in fig. 3, the detection range of the line structure light sensor is a sector, and a two-dimensional point cloud within the sector can be obtained; the sector is, for example, perpendicular to the working surface on which the robot is located. Optionally, the line structure light sensor may swing in the horizontal direction, or the robot may move the sensor, so as to obtain a three-dimensional point cloud. In other embodiments, the laser sensor 310 may be an area-array laser sensor, such as a multi-line lidar, that can directly output a three-dimensional point cloud.
For ease of description, the embodiments of the present application are mainly illustrated with the laser sensor 310 being a line structure light sensor that obtains a two-dimensional point cloud in the height direction.
For example, referring to fig. 3, the robot is located on a working surface (such as the ground, a table top, etc.), and the plane of the working surface may be referred to as a first preset plane; when the laser sensor 310 is a line structured light sensor, the scan plane of the laser sensor 310 may be perpendicular to the working surface of the robot; the scan plane of the laser sensor 310 may be referred to as a second preset plane.
Alternatively, the laser sensor 310 may be disposed at a side or top of the robot, so that the detection direction of the laser sensor 310 may include the direction of the height of the robot.
For example, the line structure light sensor may be mounted to the left side, the right side, the front side, or the rear side of the robot according to a measurement range of the line structure light sensor; of course, the present invention is not limited thereto, and may be mounted on the top of a robot, for example. Optionally, the linear structure light sensor is mounted on the left side or the right side of the robot, so that the detection of the topography on the left side or the right side can be realized; or the line structure light sensor is mounted on the front side of the robot, so that the detection of the terrain on the front side can be realized.
The laser sensor 310 emits laser light to the object, receives the laser light returned by the object, and determines the distance between the position on the object and the laser sensor 310 according to the time difference between the emitted laser light and the received laser light; if the returned laser light is not received, it can be determined that there is no object or that the distance of the object is great in the direction in which the laser light is emitted. The laser sensor 310 may detect a distance between a position of an object in a direction in which laser light is emitted and a sensor optical center, and determine a position point in the direction according to the distance corresponding to the direction when the distance between the position and the sensor optical center is less than or equal to a maximum range value of the laser sensor 310; when the distance between the position and the optical center of the sensor exceeds the maximum range value, determining a position point in the direction according to the direction and the maximum range value, so that the distance corresponding to the position point in the direction is equal to the maximum range value. A plurality of position points obtained by the laser sensor 310 in a plurality of different directions may be used as a point cloud obtained by the laser sensor when the robot is on a work surface.
As shown in fig. 3, the distances between each position in the direction in which the laser sensor 310 emits laser light and the optical center of the sensor exceed the maximum range value, and then the distances corresponding to each position point in different directions in the point cloud are equal to the maximum range value, and each position point in different directions is arranged in a fan shape relative to the laser sensor 310.
As shown in fig. 4, the distances corresponding to position point a, position point b, and position point c in the point cloud are actually detected distances; for example, these points can be determined as positions on the working surface. The distances corresponding to position point d, position point e, position point f, and so on are equal to the maximum range value. In the data output by the laser sensor, position points corresponding to the maximum range value may be marked with a corresponding identifier, for example as maximum range points; maximum range points occur when the laser beam is emitted into an open area.
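As a rough illustration of the conversion just described, the sketch below turns per-beam (angle, distance) readings into two-dimensional position points in the sensor's scan plane and flags maximum range points. The names and axis conventions here are illustrative assumptions, not the patent's actual implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class ScanPoint:
    y: float         # lateral coordinate in the scan plane (m), assumed convention
    z: float         # vertical coordinate in the scan plane (m), assumed convention
    max_range: bool  # True when the beam returned no echo within the range limit

def scan_to_points(angles_rad, distances_m, max_range_m):
    """Convert per-beam (angle, distance) readings into 2-D position points.

    Beams whose distance reaches the maximum range value are clamped to that
    value and marked as maximum range points, mirroring the marking scheme
    described above.
    """
    points = []
    for theta, d in zip(angles_rad, distances_m):
        capped = d >= max_range_m
        r = max_range_m if capped else d
        points.append(ScanPoint(y=r * math.cos(theta),
                                z=r * math.sin(theta),
                                max_range=capped))
    return points
```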
In some embodiments, the acquiring a point cloud acquired by the laser sensor while the robot is working on the surface includes: acquiring sensor data of the laser sensor when the robot is on the working surface; and determining the point cloud under a robot coordinate system of the robot according to the sensor data.
Referring to fig. 5, a coordinate transformation matrix between the sensor coordinate system of the laser sensor and the robot coordinate system of the robot may be determined according to the installation position of the laser sensor on the robot body, and the point cloud output by the laser sensor is converted mathematically into a point cloud in the robot coordinate system.
As shown in fig. 5, the laser sensor is installed at the side of the robot, and the sensor coordinate system corresponding to the laser sensor has a translation and a rotation relative to the robot coordinate system; the original data given by the laser sensor is in a sensor coordinate system, and the robot can process the original data conveniently by converting the original data given by the laser sensor into the robot coordinate system. The embodiment of the application will also mainly be described by taking the processing of the point cloud under the robot coordinate system as an example.
Optionally, the origin of the robot coordinate system is located in the first preset plane. Referring to fig. 6 in conjunction with fig. 5, the origin O of the robot coordinate system is located at the geometric center of the bottom of the robot and lies in the working surface on which the robot is located, i.e. in the first preset plane. This facilitates the detection of cliffs or depressions.
Alternatively, as shown in fig. 5, the positive X-axis direction of the robot coordinate system coincides with the positive robot direction, and the line structure light sensor is mounted to the right side of the robot, so that detection of cliffs or depressions on the right side can be achieved, which is not limited to this.
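The coordinate conversion described above amounts to one rigid-body transform per point. A minimal sketch, assuming the mounting rotation R and translation t have already been calibrated; the placeholder pose values below are illustrative, not from the patent:

```python
import numpy as np

def sensor_to_robot(points_sensor: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Map Nx3 sensor-frame points into the robot frame: p_robot = R @ p_sensor + t."""
    return points_sensor @ R.T + t

# Placeholder mounting pose: sensor on the right side of the robot, its scan
# plane parallel to the robot's YOZ plane, offset from the base-frame origin.
R_mount = np.eye(3)                     # assumed: no rotation between the frames
t_mount = np.array([0.0, -0.15, 0.10])  # assumed offset in metres
```

In the robot frame assumed here, the working surface is the plane z = 0, so a point's height above or below the first preset plane can be read directly from its z coordinate.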
Step S120, when the point cloud acquired by the robot at the current position includes a first position point located below the first preset plane and having a distance from the first preset plane greater than or equal to a preset threshold value, and includes a second position point having a distance from the first preset plane less than the preset threshold value, determining at least one target position point between the first position point and the second position point, where a distance from the target position point to the first preset plane is greater than or equal to the preset threshold value.
As shown in fig. 6, the point cloud includes a first position point d located below the first preset plane and having a distance from the first preset plane greater than or equal to a preset threshold, and further includes a second position point a, a second position point b, and a second position point c having a distance from the first preset plane less than the preset threshold.
The second position point, whose distance from the first preset plane is smaller than the preset threshold, can be determined as a position point on the working surface; the first position point, which is located below the first preset plane at a distance greater than or equal to the preset threshold, can be determined as a position point below the working surface, and the area where the first position point is located can be determined as a cliff or a depression, also called a height drop area.
For example, the cliff or depression may include at least one of the following: a threshold, a step, a pool; of course, it is not limited thereto.
Alternatively, the preset threshold may be determined according to the flatness of the working surface: the flatter the working surface, the smaller the preset threshold may be, and the more uneven the working surface, the larger the preset threshold. For example, the preset threshold may be smaller when the working surface is a flat hard floor and larger when the working surface is a carpet. By choosing a suitable preset threshold according to the flatness of the working surface, the accuracy of cliff or depression detection can be improved, for example preventing uneven places on the working surface from being identified as cliffs or depressions.
For example, referring to fig. 7 in conjunction with fig. 6, the origin O of the robot coordinate system is located in the first preset plane, and the preset threshold may include a first preset threshold and a second preset threshold. The plane located below the first preset plane at a distance equal to the first preset threshold may be determined as a third preset plane, and the plane located above the first preset plane at a distance equal to the second preset threshold may be determined as a fourth preset plane.
The second position point is located between a third preset plane and a fourth preset plane corresponding to the first preset plane, that is, the second position point may include a position point located on the first preset plane, may further include a position point located below the first preset plane and having a distance from the first preset plane smaller than the first preset threshold (that is, a position point between the first preset plane and the third preset plane), and may further include a position point located above the first preset plane and having a distance from the first preset plane smaller than the second preset threshold (that is, a position point between the first preset plane and the fourth preset plane).
The first position point is located below the third preset plane, that is, the first position point is located below the first preset plane and the distance between the first position point and the first preset plane is greater than or equal to a first preset threshold value, and it can be determined that the area where the first position point is located is a cliff or a depression; optionally, the third preset plane may be referred to as a cliff threshold plane, and the first preset threshold may be referred to as a cliff height threshold, or a maximum threshold that the robot can cross.
Alternatively, a location point above the fourth preset plane, that is, a location point above the first preset plane and having a distance from the first preset plane greater than or equal to a second preset threshold value may be determined as a location point of an obstacle, for example, it may be determined that a suspended obstacle, such as a bed edge, exists in an area where the location point is located; it is also possible to control the robot at least to keep the distance from the location point unchanged or to control the robot away from the location point. Alternatively, the fourth preset plane may be referred to as an obstacle threshold plane, and the second preset threshold may be referred to as an obstacle height threshold.
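Putting the two thresholds together, the classification of a robot-frame point reduces to comparing its z coordinate (height relative to the first preset plane at z = 0) against the third and fourth preset planes. A minimal sketch with illustrative threshold values (the constants below are assumptions, not values given in the patent):

```python
CLIFF_THRESHOLD_M = 0.03     # assumed first preset threshold (cliff height threshold)
OBSTACLE_THRESHOLD_M = 0.05  # assumed second preset threshold (obstacle height threshold)

def classify_height(z_m: float) -> str:
    """Classify a robot-frame point by its height z relative to the working surface."""
    if z_m <= -CLIFF_THRESHOLD_M:
        return "first"      # below the third preset plane: candidate cliff/depression
    if z_m >= OBSTACLE_THRESHOLD_M:
        return "obstacle"   # above the fourth preset plane: e.g. a suspended bed edge
    return "second"         # between the third and fourth planes: on the working surface
```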
As shown in fig. 6, the terrain of the larger area between the first position point d and the second position points (a, b, c) is not determined, and the boundary between the working surface and the height drop area, such as the cliff or depression corresponding to the first position point d, has not yet been determined; if the robot continues to move in the direction of the first position point d, it may drive off the working surface and fall into the cliff or depression. Referring to fig. 8, in the embodiments of the present application, at least one target position point may be determined between the first position point and the second position point so as to describe the terrain of the area between them more accurately, so that the boundary between the height drop area, such as a cliff or depression, and the working surface can be determined more accurately; this avoids the repeated boundary detection that an infrared sensor at the bottom of the robot would need in order to outline the contour of the height drop area.
In some embodiments, referring to fig. 8, the distance between the projection of the target position point on the first preset plane and the current position of the robot is smaller than the distance between the projection of the first position point d on the first preset plane and the current position, and larger than the distance between the projection of the second position point c on the first preset plane and the current position. That is, the target position point lies between the first position point d and the second position point c, for example in the Y-axis direction of the robot coordinate system, so that the determined target position point is between the first position point and the second position point.
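Stated as a predicate (robot frame assumed, with the current position at the origin and projections taken onto the z = 0 plane; the helper name is hypothetical):

```python
import math

def between_projections(target, first_pt, second_pt) -> bool:
    """Check that the target point's projected distance from the robot lies
    strictly between those of the second and first position points."""
    dist = lambda p: math.hypot(p[0], p[1])  # distance of the z=0 projection from the origin
    return dist(second_pt) < dist(target) < dist(first_pt)
```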
Illustratively, the second predetermined plane is perpendicular to the working surface of the robot and parallel to the exit direction (e.g., right side) of the laser sensor. As shown in fig. 6, the positive X-axis direction of the robot coordinate system coincides with the positive robot direction, the Z-axis is the height direction, the Y-axis is parallel to the emission direction of the laser sensor, and the second preset plane may be the YOZ plane of the robot coordinate system.
For example, when the laser sensor is a line structure light sensor, a point cloud on the second preset plane may be obtained; when the laser sensor is a multi-line laser radar, the point cloud on the second preset plane can be obtained from the three-dimensional point cloud.
Illustratively, the step S120 includes: when the point cloud on the second preset plane comprises a first position point which is positioned below the first preset plane and has a distance from the first preset plane larger than or equal to a preset threshold value and comprises a second position point which has a distance from the first preset plane smaller than the preset threshold value, at least one target position point is determined on the second preset plane, and the projection of the target position point on the first preset plane is positioned between the projections of the first position point and the second position point on the first preset plane.
Referring to fig. 8 in conjunction with fig. 6, at least one target position point may be determined between the first position point d and the second position point c in the YOZ plane of the robot coordinate system. Determining the target position point on the second preset plane constrains it by the positions of the first position point and the second position point actually detected by the laser sensor; for example, this prevents misjudgment of a cliff or depression that could result from the determined target position point deviating too far from the first and second position points in the X-axis direction of the robot coordinate system.
Optionally, the distance H between the target location point and the first preset plane is greater than or equal to the preset threshold, that is, the area where the target location point is located is a height drop area such as a cliff or a depression compared to the working surface, and the target location point may be referred to as a virtual cliff depression location point.
Specifically, the distance H between the determined target position point and the first preset plane may be a distance that is greater than a first preset threshold and specified according to actual needs.
In some embodiments, the distance H between the target location point and the first preset plane is determined according to the distance between the first location point near the second location point and the first preset plane.
As shown in fig. 8, the distance H between the target position point and the first preset plane on which the working surface is located is equal to the distance between the first position point d and the first preset plane. Of course, the distance H between the target position point and the first preset plane on which the working surface is located is not limited thereto, and may be greater than a first preset threshold and less than or equal to a distance between the first position point near the second position point and the first preset plane.
When the number of the target position points is plural, the distances H between the different target position points and the first preset plane may be the same or different. For example, the smaller the distance H corresponding to the target position point closer to the second position point, the larger the distance H corresponding to the target position point closer to the first position point d.
In some embodiments, the determining at least one target location point between the first location point and the second location point comprises: determining at least one target location point between a first location point proximate to the second location point and a second location point proximate to the first location point; the distance between the adjacent target position points is equal to a preset distance, and/or the distance between the first position point close to the second position point and the adjacent target position point is equal to a preset distance, and/or the distance between the second position point close to the first position point and the adjacent target position point is equal to a preset distance.
For example, referring to fig. 8, a plurality of target position points may be determined between the first position point d and the second position point c at equal intervals of the preset distance; for example, the distance between any two adjacent target position points is equal to the preset distance. By determining a plurality of target position points at equal intervals, the height drop area between the first position point d and the working surface can be filled accurately, and the areas where the plurality of target position points are located can be determined to be cliffs or depressions. By controlling the robot to at least stop moving in the direction of the target position points, the robot can be prevented from driving off the working surface and falling into the cliff or depression.
For example, referring to fig. 8, determining at least one target location point between the first location point and the second location point includes: starting from a first position point d close to the second position points (a, b and c), determining a plurality of interpolation points on the laser projection ray corresponding to the first position point d at a preset interpolation distance until the distance between the last interpolation point and the first preset plane is smaller than or equal to an end threshold; determining a target plane according to a distance (e.g. H) between a first position point d close to the second position points (a, b, c) and the first preset plane, wherein the target plane is parallel to the first preset plane; and determining the target position point according to the projection of the interpolation point on the target plane.
For example, the interpolation points are obtained by interpolating, at a preset interpolation distance, along the laser projection ray corresponding to the cliff or depression, for example the laser projection ray corresponding to the first position point d. Alternatively, the preset interpolation distance may be determined according to the ratio of the minimum distance between the second position point and the first position point (the distance between the projections of the second position point c and the first position point d on the first preset plane) to a preset interpolation resolution (e.g. 11 interpolation points).
For example, starting from the end of the laser projection ray corresponding to the first position point d (the end far from the laser sensor), interpolation is performed at the preset interpolation distance until the distance between the Z-axis height of the interpolation point and the first preset plane is less than or equal to an end threshold, for example 0, or 1% to 10% of the first preset threshold.
As shown in fig. 8, according to the projection of the interpolation point on the target plane, determining the target position point, that is, the distance between the target position point and the first preset plane is equal to the distance between the first position point d and the first preset plane; for example, the Z-axis height of each interpolation point is adjusted to be equal to the distance between the first position point d and the first preset plane, so as to obtain a target position point corresponding to each interpolation point. Of course, the method is not limited thereto, for example, the target plane is the third preset plane, and the Z-axis height of each interpolation point is adjusted to the Z-axis height of the third preset plane; or the target plane is positioned below the third preset plane, and the Z-axis height of each interpolation point is adjusted to be lower than the Z-axis height of the third preset plane.
As shown in fig. 8, by determining a plurality of target position points between the first position point d and the second position point c and filling the gap between the actually detected first position point d and the actually detected second position point c, it is possible to determine that the areas where the plurality of target position points are located are all height fall areas such as cliffs or depressions.
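The filling procedure of fig. 8 can be sketched as follows, assuming robot-frame coordinates with the working surface at z = 0, a known sensor optical-centre position, and a detected first position point d. The function name, the step size, and the stopping rule are illustrative readings of the description above, not the patent's exact implementation.

```python
import numpy as np

def fill_target_points(sensor_origin: np.ndarray, first_point: np.ndarray,
                       step_m: float, end_threshold_m: float) -> list:
    """Interpolate along the laser projection ray through the first position
    point d, walking back toward the sensor in steps of step_m, and project
    each interpolation point onto the target plane z = d.z (parallel to the
    first preset plane). Stops once the interpolated height is within
    end_threshold_m of the working surface.
    """
    direction = first_point - sensor_origin
    direction = direction / np.linalg.norm(direction)  # unit vector from sensor toward d

    targets = []
    p = first_point.astype(float)
    for _ in range(1000):                  # safety bound on the interpolation
        p = p - step_m * direction         # step back along the ray toward the sensor
        if p[2] >= -end_threshold_m:       # interpolated point has (nearly) reached the plane
            break
        # project onto the target plane: keep x and y, set z to the depth of point d
        targets.append(np.array([p[0], p[1], first_point[2]]))
    return targets
```

Following the description above, step_m could for instance be chosen as the projected gap between the second position point c and the first position point d divided by a preset interpolation resolution such as 11.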
And step S130, determining the position corresponding to the target position point as a cliff or a depression.
For example, when a map is constructed from the data of the laser sensor, the target position points may be added to the point cloud, and the positions corresponding to the target position points are marked as cliffs or depressions; when the robot later executes a preset task according to the constructed map, it can accurately determine the cliff or depression region from the target position points while traversing the map, so as to avoid that region.
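As one way to picture the map-marking step, the sketch below stamps cliff labels into an occupancy-style grid; the grid layout and cell semantics are assumptions for illustration, not the patent's map format.

```python
import numpy as np

CLIFF = 2  # assumed cell label for cliff/depression

def mark_cliff_cells(grid: np.ndarray, resolution_m: float,
                     target_points_world) -> None:
    """Mark the grid cells containing target position points as cliff/depression.

    grid is a 2-D array indexed [row, col] with the world origin at cell (0, 0);
    points that fall outside the grid are ignored.
    """
    rows, cols = grid.shape
    for x, y in target_points_world:
        col, row = int(x / resolution_m), int(y / resolution_m)
        if 0 <= row < rows and 0 <= col < cols:
            grid[row, col] = CLIFF
```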
For example, when the robot performs a preset task (such as a floor cleaning task), the target position point may be determined in real time through step S110 and step S120, and the position corresponding to the target position point is determined as a cliff or a depression; by at least stopping movement in the direction of the target position point, the robot can be prevented from driving off the working surface and falling into the cliff or depression.
By way of example, the robot can determine the boundary between the cliff or the depression and the working surface according to the target position point, and can accurately execute a preset task on the working surface along the boundary, thereby improving the efficiency and safety of the task along the boundary.
According to the terrain detection method for a robot provided by the embodiments of the present application, the robot carries a laser sensor whose detection direction includes the direction of the robot's height, and the method includes the following steps: acquiring a point cloud collected by the laser sensor while the robot is on a working surface, where the plane of the working surface is a first preset plane; when the point cloud acquired by the robot at the current position includes a first position point that is located below the first preset plane at a distance greater than or equal to a preset threshold, and also includes a second position point whose distance from the first preset plane is smaller than the preset threshold, determining at least one target position point between the first position point and the second position point, where the distance from the target position point to the first preset plane is greater than or equal to the preset threshold; and determining the position corresponding to the target position point as a cliff or a depression. By determining the first position point of the height drop area from the point cloud of the laser sensor and filling at least one target position point between the first position point and the second position point corresponding to the working surface, the terrain of the area between the two points is described more accurately, so the boundary between a height drop area such as a cliff or depression and the working surface can be determined more accurately. Moreover, compared with an infrared sensor at the bottom of the robot, which can outline the contour of the height drop area only by repeated detection along its boundary, the embodiments of the present application are more efficient.
The detection range of a laser sensor such as a structured light sensor is relatively large, for example more than 10 cm, so the robot can discover a cliff or depression without getting too close to it and then take corresponding avoidance actions, which greatly improves the efficiency of handling cliffs and depressions.
Referring to fig. 9 in combination with the above embodiments, fig. 9 is a flow chart of a control method of a robot according to an embodiment of the present application.
Specifically, as shown in fig. 3, the robot carries a laser sensor 310, and the detection direction of the laser sensor 310 includes the direction of the robot height.
As shown in fig. 9, the control method of the robot includes steps S210 to S220.
Step S210, acquiring point clouds acquired by the robot through the laser sensor when the robot is on a working surface, wherein the plane of the working surface is a first preset plane.
In some embodiments, the acquiring a point cloud acquired by the laser sensor while the robot is working on the surface includes: acquiring sensor data of the laser sensor when the robot is on the working surface; and determining the point cloud under a robot coordinate system of the robot according to the sensor data.
Step S220, when the point cloud acquired by the robot at the current position includes a position point that is located below the first preset plane and whose distance from the first preset plane is greater than or equal to a preset threshold, controlling the robot to at least stop moving toward that position point.
In some embodiments, a location point located below the first preset plane and having a distance from the first preset plane greater than or equal to a preset threshold may be referred to as a first location point. As shown in fig. 6, the point cloud includes a first location point d located below the first preset plane and having a distance from the first preset plane greater than or equal to a preset threshold.
The first position point, which is located below the first preset plane at a distance greater than or equal to the preset threshold, can be determined as a position point below the working surface, and the area where the first position point is located can be determined as a cliff or a depression, also called a height drop area.
For example, the cliff or depression may include at least one of the following: a threshold, a step, a pool; of course, it is not limited thereto.
By controlling the robot to at least stop moving in the direction of the first position point, the robot can be prevented from driving off the working surface and falling into the cliff or depression at the first position point.
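A minimal reaction sketch for this step, assuming a hypothetical robot interface with a stop() command and cliff points already expressed in the robot frame; the names and the forward sector angle are illustrative assumptions, not an actual robot API:

```python
import math

FORWARD_SECTOR_RAD = math.radians(45)  # assumed half-angle of the "toward the point" sector

def react_to_cliff(robot, cliff_points_robot_frame) -> None:
    """At least stop the robot's motion toward any detected cliff/depression point."""
    for x, y, _z in cliff_points_robot_frame:
        bearing = math.atan2(y, x)             # direction of the point in the robot frame
        if abs(bearing) < FORWARD_SECTOR_RAD:  # the point lies roughly ahead
            robot.stop()                       # hypothetical API: halt forward motion
            break
```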
The detection range of a laser sensor such as a structured light sensor is relatively large, for example more than 10 cm, so the robot can discover a cliff or depression without getting too close to it and then take corresponding avoidance actions, which greatly improves the efficiency of handling cliffs and depressions.
In some embodiments, step S220 includes: when the point cloud acquired by the robot at the current position comprises a first position point which is positioned below the first preset plane and has a distance from the first preset plane larger than or equal to a preset threshold value and comprises a second position point which has a distance from the first preset plane smaller than the preset threshold value, at least one target position point is determined between the first position point and the second position point, and the distance from the target position point to the first preset plane is larger than or equal to the preset threshold value; and controlling the robot to at least stop moving towards the direction of the target position point.
The second position point, whose distance from the first preset plane is smaller than the preset threshold, can be determined as a position point on the working surface.
The origin of the robot coordinate system is located in the first preset plane, and the second position point, of which the distance between the origin and the first preset plane is smaller than the preset threshold value, is located between a third preset plane and a fourth preset plane corresponding to the first preset plane; the third preset plane is positioned below the first preset plane and the distance from the first preset plane is equal to a first preset threshold value, and the fourth preset plane is positioned above the first preset plane and the distance from the first preset plane is equal to a second preset threshold value; the first position point is located below the third preset plane.
As shown in fig. 6, the terrain of the larger area between the first position point d and the second position points (a, b, c) is not determined, and the boundary between the working surface and the height drop area, such as the cliff or depression corresponding to the first position point d, has not yet been determined; if the robot continues to move in the direction of the first position point d, it may drive off the working surface and fall into the cliff or depression. Referring to fig. 8, in the embodiments of the present application, by determining at least one target position point between the first position point and the second position point, the terrain of the area between them can be described more accurately, and the boundary between the height drop area, such as a cliff or depression, and the working surface can be determined more accurately. By controlling the robot to at least stop moving in the direction of the target position point, cliffs or depressions can be avoided in a more timely manner, preventing the robot from driving off the working surface and falling into the cliff or depression at the first position point.
In some embodiments, the distance between the projection of the target location point on the first preset plane and the current location is less than the distance between the projection of the first location point on the first preset plane and the current location and greater than the distance between the projection of the second location point on the first preset plane and the current location.
In some embodiments, the second preset plane is perpendicular to the working surface of the robot and parallel to the exit direction of the laser sensor. In this case, determining at least one target position point between the first position point and the second position point includes: when, in the point cloud acquired by the robot at the current position, the position points on the second preset plane include a first position point that is located below the first preset plane at a distance greater than or equal to the preset threshold and include a second position point whose distance from the first preset plane is smaller than the preset threshold, determining at least one target position point on the second preset plane, where the projection of the target position point on the first preset plane lies between the projections of the first position point and the second position point on the first preset plane.
In some embodiments, the distance between the target location point and the first preset plane is determined from the distance between the first location point near the second location point and the first preset plane.
Illustratively, said determining at least one target location point between said first location point and said second location point comprises: starting from a first position point close to the second position point, determining a plurality of interpolation points on a laser projection ray corresponding to the first position point by a preset interpolation distance until the distance between the last interpolation point and the first preset plane is smaller than or equal to an end threshold; determining a target plane according to the distance between a first position point close to the second position point and the first preset plane, wherein the target plane is parallel to the first preset plane; and determining the target position point according to the projection of the interpolation point on the target plane.
In some embodiments, the determining at least one target location point between the first location point and the second location point comprises: determining at least one target location point between a first location point proximate to the second location point and a second location point proximate to the first location point; the distance between the adjacent target position points is equal to a preset distance, and/or the distance between the first position point close to the second position point and the adjacent target position point is equal to a preset distance, and/or the distance between the second position point close to the first position point and the adjacent target position point is equal to a preset distance.
Referring to fig. 10 in combination with the above embodiments, fig. 10 is a schematic block diagram of a robot according to an embodiment of the present application.
The robot carries a laser sensor 310. In some embodiments, the detection direction of the laser sensor 310 includes a direction of the robot height.
The robot may further comprise a walking unit, which is a component related to the movement of the robot, for example comprising driving wheels and universal wheels. The universal wheel and the driving wheel are matched to realize the steering and the movement of the robot. Of course, the present invention is not limited to this, and may be, for example, a crawler-type or foot-type walking unit.
The robot further includes: a processor 301 and a memory 302, the memory 302 being for storing a computer program.
The processor 301 and the memory 302 are illustratively connected by a bus 303, such as an I2C (Inter-integrated Circuit) bus, for example.
Specifically, the processor 301 may be a Micro-controller Unit (MCU), a central processing Unit (Central Processing Unit, CPU), a digital signal processor (Digital Signal Processor, DSP), or the like.
Specifically, the memory 302 may be a Flash chip, a Read-Only Memory (ROM), a magnetic disk, an optical disk, a USB flash drive, a removable hard disk, or the like.
The processor 301 is configured to execute a computer program stored in the memory 302, and implement steps of a terrain detection method of the robot and/or implement steps of a control method of the robot when the computer program is executed.
The specific principles and implementation manners of the robot provided in the embodiments of the present application are similar to those of the foregoing embodiments, and are not repeated here.
Embodiments of the present application also provide a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the steps of the method of any of the embodiments described above.
The computer-readable storage medium may be an internal storage unit of the robot of any one of the foregoing embodiments, for example, a hard disk or a memory of the robot. The computer-readable storage medium may also be an external storage device of the robot, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash memory card (Flash Card) provided on the robot.
It is to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
It should also be understood that the term "and/or" as used in this application and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
While the application has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and equivalent substitutions may be made without departing from the scope of the application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (12)

1. A terrain detection method of a robot, wherein the robot carries a laser sensor and a detection direction of the laser sensor includes a height direction of the robot, the method comprising:
acquiring point clouds acquired by the laser sensor when the robot is on a working surface, wherein the plane of the working surface is a first preset plane;
determining, when the point cloud acquired by the robot at the current position comprises a first position point which is located below the first preset plane and whose distance from the first preset plane is greater than or equal to a preset threshold and comprises a second position point whose distance from the first preset plane is less than the preset threshold, at least one target position point between the first position point and the second position point, wherein the distance from the target position point to the first preset plane is greater than or equal to the preset threshold; and
determining the position corresponding to the target position point as a cliff or a depression.
2. The terrain detection method of claim 1, wherein the distance between the projection of the target position point on the first preset plane and the current position is less than the distance between the projection of the first position point on the first preset plane and the current position, and greater than the distance between the projection of the second position point on the first preset plane and the current position.
3. The terrain detection method of claim 1, wherein the point cloud acquired by the robot at the current position comprises a point cloud on a second preset plane, the second preset plane being perpendicular to a working surface of the robot and parallel to an exit direction of the laser sensor;
when the point cloud acquired by the robot at the current position includes a first position point located below the first preset plane and having a distance from the first preset plane greater than or equal to a preset threshold value, and includes a second position point having a distance from the first preset plane less than the preset threshold value, determining at least one target position point between the first position point and the second position point includes:
when the point cloud on the second preset plane comprises a first position point which is located below the first preset plane and whose distance from the first preset plane is greater than or equal to a preset threshold and comprises a second position point whose distance from the first preset plane is less than the preset threshold, determining at least one target position point on the second preset plane, wherein the projection of the target position point on the first preset plane is located between the projections of the first position point and the second position point on the first preset plane.
4. A terrain detection method according to any one of claims 1-3, characterized in that the distance between the target position point and the first preset plane is determined according to the distance between the first preset plane and the first position point close to the second position point.
5. The terrain detection method of claim 4, wherein said determining at least one target position point between said first position point and said second position point comprises:
starting from a first position point close to the second position point, determining a plurality of interpolation points on a laser projection ray corresponding to the first position point at a preset interpolation distance until the distance between the last interpolation point and the first preset plane is less than or equal to an end threshold;
determining a target plane according to the distance between the first position point close to the second position point and the first preset plane, wherein the target plane is parallel to the first preset plane;
and determining the target position point according to the projection of the interpolation point on the target plane.
6. A terrain detection method according to any one of claims 1-3, characterized in that said determining at least one target position point between said first position point and said second position point comprises:
determining at least one target position point between the first position point close to the second position point and the second position point close to the first position point, wherein the distance between adjacent target position points is equal to a preset distance, and/or the distance between the first position point close to the second position point and its adjacent target position point is equal to the preset distance, and/or the distance between the second position point close to the first position point and its adjacent target position point is equal to the preset distance.
7. A terrain detection method according to any one of claims 1-3, characterized in that acquiring the point cloud acquired by the laser sensor when the robot is on the working surface comprises:
acquiring sensor data of the laser sensor when the robot is on the working surface;
and determining the point cloud under a robot coordinate system of the robot according to the sensor data.
8. The terrain detection method of claim 7, wherein an origin of the robot coordinate system is located in the first preset plane; the second position point, whose distance from the first preset plane is less than the preset threshold, is located between a third preset plane and a fourth preset plane corresponding to the first preset plane; the third preset plane is located below the first preset plane at a distance from the first preset plane equal to a first preset threshold, and the fourth preset plane is located above the first preset plane at a distance from the first preset plane equal to a second preset threshold; and
the first position point is located below the third preset plane.
9. A control method of a robot, wherein the robot carries a laser sensor and a detection direction of the laser sensor includes a height direction of the robot, the method comprising:
acquiring point clouds acquired by the laser sensor when the robot is on a working surface, wherein the plane of the working surface is a first preset plane;
when the point cloud acquired by the robot at the current position comprises a position point which is located below the first preset plane and whose distance from the first preset plane is greater than or equal to a preset threshold, controlling the robot to at least stop moving towards the position point.
10. The control method according to claim 9, wherein when the point cloud acquired by the robot at the current position comprises a position point which is located below the first preset plane and whose distance from the first preset plane is greater than or equal to a preset threshold, controlling the robot to at least stop moving towards the position point comprises:
determining, when the point cloud acquired by the robot at the current position comprises a first position point which is located below the first preset plane and whose distance from the first preset plane is greater than or equal to the preset threshold and comprises a second position point whose distance from the first preset plane is less than the preset threshold, at least one target position point between the first position point and the second position point, wherein the distance from the target position point to the first preset plane is greater than or equal to the preset threshold; and
controlling the robot to at least stop moving towards the target position point.
11. A robot carrying a laser sensor, the robot further comprising a processor and a memory for storing a computer program, wherein the processor is configured to execute the computer program and, when executing the computer program, to implement:
the terrain detection method of a robot according to any one of claims 1 to 8; and/or
the control method of a robot according to any one of claims 9 to 10.
12. A computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement:
the terrain detection method of a robot according to any one of claims 1 to 8; and/or
the control method of a robot according to any one of claims 9 to 10.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311724822.XA 2023-12-14 2023-12-14 Terrain detection method for robot, control method for robot, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311724822.XA 2023-12-14 2023-12-14 Terrain detection method for robot, control method for robot, and storage medium

Publications (1)

Publication Number Publication Date
CN117665849A 2024-03-08

Family

ID=90071255

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311724822.XA Terrain detection method for robot, control method for robot, and storage medium 2023-12-14 2023-12-14 (Pending)

Country Status (1)

Country Link
CN (1) CN117665849A (en)

Similar Documents

Publication Publication Date Title
US11013385B2 (en) Automatic cleaning device and cleaning method
CN111035327B (en) Cleaning robot, carpet detection method, and computer-readable storage medium
CN108852184B (en) Non-blind area sweeping robot based on deep learning algorithm and sweeping control method thereof
CN110403539B (en) Cleaning control method for cleaning robot, and storage medium
US10517456B2 (en) Mobile robot and method of controlling the same
CN110313863B (en) Autonomous mobile cleaning machine, cleaning method for autonomous mobile cleaning machine, and program
CN110313867B (en) Autonomous mobile cleaner, cleaning method for autonomous mobile cleaner, and recording medium
EP2540203B1 (en) Robot cleaner and control method thereof
EP2677386B1 (en) Robot cleaner and obstacle detection control method of the same
CN110477820B (en) Obstacle following cleaning method for cleaning robot, and storage medium
JP2022546289A (en) CLEANING ROBOT AND AUTOMATIC CONTROL METHOD FOR CLEANING ROBOT
JP2015534048A (en) Robot positioning system
KR101938703B1 (en) Robot cleaner and control method for the same
JP2020038665A (en) Navigation of autonomous mobile robots
CN110495825B (en) Obstacle crossing method for cleaning robot, and storage medium
JP2005135400A (en) Self-propelled working robot
KR20230010575A (en) Method for controlling traveling of self-cleaning device, device, system, and storage medium
CN211933898U (en) Cleaning robot
CN114601399B (en) Control method and device of cleaning equipment, cleaning equipment and storage medium
CN113693505B (en) Obstacle avoidance method and device for sweeping robot and storage medium
CN113848944A (en) Map construction method and device, robot and storage medium
CN117665849A (en) Terrain detection method for robot, control method for robot, and storage medium
CN217792839U (en) Automatic cleaning equipment
CN117784781A (en) Robot control method, robot, and storage medium
JP2020099461A (en) Autonomously travelling type cleaner

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination