CN113552589A - Obstacle detection method, robot, and storage medium


Info

Publication number
CN113552589A
Authority
CN
China
Prior art keywords
obstacle
sensor
determining
robot
moment
Prior art date
Legal status
Pending
Application number
CN202010252466.6A
Other languages
Chinese (zh)
Inventor
刘毅 (Liu Yi)
郭斌 (Guo Bin)
Current Assignee
Hangzhou Ezviz Software Co Ltd
Original Assignee
Hangzhou Ezviz Software Co Ltd
Application filed by Hangzhou Ezviz Software Co Ltd filed Critical Hangzhou Ezviz Software Co Ltd
Priority to CN202010252466.6A
Publication of CN113552589A

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/93 Lidar systems specially adapted for specific applications for anti-collision purposes
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B62 LAND VEHICLES FOR TRAVELLING OTHERWISE THAN ON RAILS
    • B62D MOTOR VEHICLES; TRAILERS
    • B62D57/00 Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track
    • B62D57/02 Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members
    • B62D57/032 Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted supporting base and legs; with alternately or sequentially lifted feet or skid
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01D MEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
    • G01D5/00 Mechanical means for transferring the output of a sensing member; Means for converting the output of a sensing member to another variable where the form or nature of the sensing member does not constrain the means for converting; Transducers not specially adapted for a specific variable
    • G01D5/12 Mechanical means for transferring the output of a sensing member; Means for converting the output of a sensing member to another variable where the form or nature of the sensing member does not constrain the means for converting; Transducers not specially adapted for a specific variable using electric or magnetic means
    • G01D5/14 Mechanical means for transferring the output of a sensing member; Means for converting the output of a sensing member to another variable where the form or nature of the sensing member does not constrain the means for converting; Transducers not specially adapted for a specific variable using electric or magnetic means influencing the magnitude of a current or voltage
    • G01D5/16 Mechanical means for transferring the output of a sensing member; Means for converting the output of a sensing member to another variable where the form or nature of the sensing member does not constrain the means for converting; Transducers not specially adapted for a specific variable using electric or magnetic means influencing the magnitude of a current or voltage by varying resistance
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06 Systems determining position data of a target
    • G01S17/08 Systems determining position data of a target for measuring distance only

Abstract

The present application provides an obstacle detection method, a robot, and a storage medium. The method is applied to a multi-legged robot whose foot is provided with a sensor module and includes: acquiring a detection signal of the sensor module of the foot of the multi-legged robot while the foot advances in a first direction along a first track; and determining whether an obstacle exists according to the detection signal of the sensor module. Because the sensor module is arranged on the foot of the robot, the scheme exploits the flexibility of the robot's legs, and the detection result is comparatively accurate.

Description

Obstacle detection method, robot, and storage medium
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to an obstacle detection method, a robot, and a storage medium.
Background
Currently, wheeled and tracked robots have begun to enter home applications, such as various floor-sweeping robots, floor-mopping robots, and window-wiping robots. However, wheeled and tracked robots have limited obstacle-surmounting capability, particularly for wall-climbing applications, whereas legged robots have good obstacle-surmounting capability.
The existing obstacle detection schemes are aimed mainly at wheeled robots and adopt, for example, stereoscopic vision or laser radar. As shown in fig. 1, a laser radar 5 placed on top of a robot body 10 can detect a high raised obstacle 1, but cannot detect a low raised obstacle 2 or a recessed obstacle 3. If a stereoscopic vision module 4 is added to the side of the robot body with a certain downward inclination, the low raised obstacle 2 or the recessed obstacle 3 can be detected. If this stereoscopic vision scheme is applied to a legged robot, however, the leg 11 of the legged robot may block the view of the sensor, resulting in inaccurate detection results.
Disclosure of Invention
The application provides an obstacle detection method, a robot and a storage medium, so as to improve the accuracy of obstacle detection.
In a first aspect, the present application provides an obstacle detection method applied to a multi-legged robot, a foot of the multi-legged robot being provided with a sensor module, the method including:
acquiring a detection signal of a sensor module of a foot part of the multi-legged robot in the process that the foot part of the multi-legged robot advances towards a first direction along a first track;
and determining whether an obstacle exists according to the detection signal of the sensor module.
In a second aspect, the present application provides a multi-legged robot comprising:
the robot comprises a robot body and a plurality of legs;
the robot body is provided with a processor, and the foot part at the tail end of each leg is provided with a sensor module;
the processor is configured to implement the method of any one of claims 1-13.
In a third aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the method of any one of the first aspect.
In a fourth aspect, an embodiment of the present application provides an electronic device, including:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any of the first aspects via execution of the executable instructions.
According to the obstacle detection method, the robot, and the storage medium provided by the embodiments of the present application, a sensor module is arranged on the foot of the multi-legged robot, and while the foot advances in the first direction along the first track, whether an obstacle exists is determined according to the detection signal of the sensor module on the foot. Because the sensor module is arranged on the foot at the end of the leg, it can fully utilize the maximum movement space and flexibility of the leg; the position of the foot is not blocked by the leg while the leg of the robot moves, so obstacles around the foot can be detected accurately.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a prior art obstacle detection scenario;
FIG. 2 is a schematic view of a multi-legged robot according to an embodiment of the present application;
FIG. 3 is a diagram illustrating an application scenario according to an embodiment of the present application;
FIG. 4 is a schematic flow chart diagram illustrating an embodiment of an obstacle detection method provided herein;
FIG. 5 is a schematic diagram illustrating a low protrusion obstacle detection principle according to an embodiment of the method provided by the present application;
FIG. 6 is a schematic diagram illustrating a high bump obstacle detection principle according to an embodiment of the method provided by the present application;
FIG. 7 is a schematic view of a foot configuration according to an embodiment of the present application;
FIG. 8 is a schematic diagram of the relative positional relationship between the robot and an obstacle according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a sensor module detection principle according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a high bump obstacle detection principle of another embodiment of the method provided by the present application;
FIG. 11 is a schematic diagram of a high bump obstacle detection principle of yet another embodiment of the method provided by the present application;
FIG. 12 is a schematic view of a sensor module detection principle according to another embodiment of the present application;
FIG. 13 is a schematic diagram of three-dimensional information of an obstacle according to an embodiment of the present application;
FIG. 14 is a schematic diagram of the principle of data fitting according to an embodiment of the present application;
FIG. 15 is a three-dimensional schematic view of a raised obstacle according to an embodiment of the present application;
FIG. 16 is a three-dimensional schematic view of a recessed obstacle according to an embodiment of the present application;
FIG. 17 is a schematic diagram of an application scenario of another embodiment of the present application;
FIG. 18 is a schematic view of a 3D map according to an embodiment of the present application;
FIG. 19 is a schematic illustration of a leg of a robot according to an embodiment of the present application;
FIG. 20 is an enlarged schematic view of a portion of the area of FIG. 19;
FIG. 21 is a schematic view of a suction cup of a leg of a robot according to an embodiment of the present application.
With the foregoing drawings in mind, certain embodiments of the disclosure have been shown and described in more detail below. These drawings and written description are not intended to limit the scope of the disclosed concepts in any way, but rather to illustrate the concepts of the disclosure to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terms "comprising" and "having," and any variations thereof, in the description and claims of this application and the drawings described herein are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Firstly, the application scenario related to the present application is introduced:
the method provided by the embodiment of the application is applied to the legged robot, such as climbing a wall and other moving scenes, so as to improve the accuracy of obstacle detection.
The technical idea of the method of the embodiment of the application is as follows:
as shown in fig. 1, due to interference of the legs, the detection effect of the obstacle is poor, in order to enable the legs of the robot not to shield the sensors for detecting the obstacle in the moving process and accurately detect different types of obstacles, the sensor module is arranged on the foot at the tail end of the legs, and the sensor module can fully utilize the maximum movement space and flexibility of the legs. The position of the foot is not shielded by the leg in the moving process of the leg of the robot, so that the obstacle around the foot can be accurately detected, and the three-dimensional information of the obstacle and the size of the obstacle can be acquired.
In order to accurately detect three-dimensional information such as the height or depth of an obstacle, in one embodiment a ranging sensor, for example a Time of Flight (TOF) ranging sensor, is disposed at the bottom of the foot of the legged robot. The TOF ranging sensor emits modulated near-infrared light, which is reflected when it encounters an obstacle; the distance between the ranging sensor and the obstacle is calculated from the time difference or phase difference between emission and reception of the light.
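To make the time-difference relation concrete, here is a minimal sketch in Python; the constant and function names are illustrative, not from the patent, and the phase-difference variant is analogous:

```python
# Minimal sketch of TOF ranging from the emit/receive time difference.
# The modulated light travels to the obstacle and back, so the one-way
# distance is half of the speed of light times the time difference.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance(time_diff_s: float) -> float:
    """Distance (m) between the ranging sensor and the reflecting surface."""
    return SPEED_OF_LIGHT * time_diff_s / 2.0
```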
In one embodiment, in order to identify a taller obstacle, the foot is further provided with a collision sensor; when the foot hits an obstacle, a collision trigger signal is generated.
as shown in fig. 2, taking a four-legged robot as an example, the multi-legged robot includes: a robot body 10, a leg 11 connected to the robot body 10, a suction cup 12 as a foot, and a sensor module provided on the foot.
In one embodiment, the sensor module includes a collision sensor 13 disposed around the foot and a ranging sensor 14 disposed at the center of the bottom of the foot; a light beam 15 emitted by the ranging sensor is also shown. The TOF ranging sensor can be arranged at the center of the bottom surface of the suction cup so that the TOF beam is perpendicular to the suction cup end face. The presence or absence of an obstacle can then be determined, for example, by comparing the distances detected by the ranging sensor at different times, or a collision with an obstacle can be determined from the intensity value of the detection signal of the collision sensor.
In other embodiments, the sensor module may instead include, for example, a stereoscopic vision module, a laser radar, or a millimeter-wave radar, but these are somewhat complicated to implement, costly, and bulky, and placing them on the foot may affect the movement of the leg and the adsorption effect of the foot.
In other embodiments, a fan 16 mounted on the robot body 10 may also be included. The fan can increase the adhesion of the robot by pumping air from the belly or blowing air toward the back, so that the robot moves more safely; in particular, it ensures reliable operation when walking and crossing obstacles on a vertical surface.
In other embodiments of the present application, the multi-legged robot may also be two-legged, three-legged, or more.
In an embodiment, a four-legged robot is taken as an example to describe the robot motion control principle:
assuming that each leg has 4 actively controlled joints and each leg has 4 controllable degrees of freedom, the motion plan takes the center point of the end surface of the suction cup (i.e., the end surface in contact with the plane to be sucked) as the target control point. The control quantity is composed of position coordinates of three XYZ space orthogonal directions of the target control point and an included angle of one sucker end face relative to the abdomen plane of the robot body. By planning the space position of the sucker at the tail end of each leg of the robot and the included angle between the sucker and the plane to be adsorbed, the robot can realize various motion modes, such as plane walking, plane obstacle crossing, crossing between two planes (forming a certain angle) and the like.
For model analysis, a single leg can be treated as an equivalent serial manipulator, and the control angles of the 4 actively controlled joints can be computed by an inverse-kinematics solution. To ensure that the suction cup can adsorb reliably onto the plane to be adsorbed, the end of the leg can be equipped with a passive joint. When the suction cup at the end of the leg is about to adsorb onto the plane, the first 4 actively controlled joints press the suction cup down vertically. Because of measurement or control errors, the suction cup end face can hardly stay exactly parallel to the plane to be adsorbed while being pressed down; at this point the passive joint tilts by a certain angle under the applied force, allowing the end of the leg to adapt to a certain angular error and ensuring that the suction cup achieves reliable adsorption. When the leg is suspended (the suction cup is in the non-adsorbed state), the return mechanism of the passive joint keeps the joint in its reset, or zero, position. The leg structure, as well as the structure of the passive joint, is described in detail in the following embodiments.
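For illustration only, a minimal sketch of the control quantity described above; the type and field names are assumptions, not from the patent, and the inverse-kinematics solve itself is left abstract:

```python
from dataclasses import dataclass

@dataclass
class FootTarget:
    """Control quantity for one leg: XYZ position of the center of the
    suction cup end face in the body coordinate system, plus the included
    angle of the end face relative to the body's abdominal plane."""
    x: float      # m
    y: float      # m
    z: float      # m
    angle: float  # rad

# Example: command the suction cup 5 cm forward and 2 cm up, end face
# kept parallel to the abdominal plane (angle 0); the 4 active joint
# angles would then be obtained from an inverse-kinematics solution.
step_target = FootTarget(x=0.0, y=0.05, z=0.02, angle=0.0)
```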
Because each leg of the robot is under-actuated (that is, each leg has a passive joint), extra constraints are added during vertical-plane motion planning, by means of the cooperation of several legs or by using the abdominal plane of the robot body, to ensure that the robot has a determinate motion trajectory.
In one embodiment, a walking mode with three-foot support can be adopted, and the specific principle is as follows:
three legs are adopted for adsorption support each time, one leg is used for stepping, and four legs are used for stepping in sequence to finish the stepping in one period. This way it is not necessary to resort to the belly plane of the robot body as a constraining surface. Since the spatial position of the ends of the three legs is determined, the spatial position of the robot will be fully constrained. Therefore, the walking mode has strong environmental adaptability.
In one embodiment, a walking mode with diagonal double-foot support can be adopted, and the specific principle is as follows:
the two opposite-angle feet are used for adsorption support each time, the other two opposite-angle legs are used for walking, and the two groups of legs are alternately carried out. As shown in fig. 3, for example, the leg 111 and the leg 113 are taken first, and the robot body 10, the leg 111, and the leg 113 are supported on the walking plane. This approach requires the use of the ventral plane of the robot body as a plane of constraint. Because the two legs support, a virtual axis passing through the tail ends of the two legs can be led out, and the whole robot can rotate around the axis. Therefore, when the robot walks in the mode, the abdomen plane of the robot body is required to be tightly attached to the walking plane or the surface of an obstacle, and then stable walking can be realized. Compared with a walking mode of three-foot support, the walking speed of the mode is higher, and the efficiency is higher.
The technical solution of the present application will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 4 is a schematic flowchart of an embodiment of an obstacle detection method provided in the present application. As shown in fig. 4, the method provided by the present embodiment is applied to a multi-legged robot, a foot of the multi-legged robot is provided with a sensor module, and the method includes:
step 101, acquiring a detection signal of a sensor module of a foot part in the process that the foot part of the multi-legged robot moves forward along a first track in a first direction.
Specifically, as shown in fig. 3, taking the diagonal-foot-support walking mode as an example, the direction of the arrow indicates the first direction in which the robot advances. Assuming the leg 111 advances along a first track, the sensor module of the foot of the leg 111 continuously detects obstacles during the movement of the leg 111 and acquires detection signals. Similarly, the leg 113 can continuously detect obstacles through the sensor module of its foot during movement and acquire detection signals. In the next step, the leg 112 and the leg 114 perform the detection.
In one embodiment, to acquire the information of an obstacle more accurately, each leg may be controlled to move along a plurality of first tracks, and the information of the obstacle may be determined from the detection signals on the different first tracks.
In fig. 3, the leg 111 and the leg 113 step forward while the leg 112 and the leg 114 are adsorbed to the plane, and the robot body 10 moves forward. During its step, the leg 111 keeps the suction cup end face parallel to the abdominal plane of the robot body 10.
And step 102, determining whether an obstacle exists according to the detection signal of the sensor module.
In one embodiment, the sensor module includes, for example, a ranging sensor. When the foot moves above the recessed obstacle 3 in front, the distance determined from the detection signal of the ranging sensor is larger than the distance previously detected over the walking plane, indicating that a recessed obstacle exists. Conversely, as shown in fig. 5, when the foot moves above the low raised obstacle 2 in front, the distance determined from the detection signal of the ranging sensor is smaller than the distance previously detected over the walking plane, indicating that a raised obstacle exists.
In one embodiment, as shown in fig. 6, during the forward movement of the leg, the height of the foot may be lower than that of the obstacle, so the obstacle cannot be detected by the ranging sensor. In this case the detection signal of the collision sensor can be used: the intensity value of the detection signal is normally a preset value, and a sudden increase of the intensity value indicates that an obstacle has been hit.
According to the method of this embodiment, the sensor module is arranged on the foot of the multi-legged robot, and whether an obstacle exists is determined from the detection signal of the sensor module of the foot while the foot advances in the first direction along the first track. Because the sensor module is arranged on the foot at the end of the leg, it can fully utilize the maximum movement space and flexibility of the leg; the position of the foot is not blocked by the leg while the leg of the robot moves, so obstacles around the foot can be detected accurately.
On the basis of the above embodiment, as shown in fig. 2, the sensor module includes: a TOF ranging sensor 14 disposed at the bottom of the foot, and a collision sensor 13 disposed on the foot. Step 102 may be implemented as follows:
determining whether an obstacle exists based on the detection signal of the collision sensor; and/or,
determining whether an obstacle exists according to the detection signal of the TOF ranging sensor.
In an embodiment, the distance detected by the TOF ranging sensor is determined according to a detection signal of the TOF ranging sensor, and if the distance detected by the TOF ranging sensor is determined to jump, it is determined that an obstacle exists.
As shown in fig. 3, when the foot moves above the recessed obstacle 3 in front, the distance determined from the detection signal of the ranging sensor is larger than the distance detected over the walking plane, so a recessed obstacle exists. Conversely, as shown in fig. 5, when the foot moves above the low raised obstacle 2 in front, the distance determined from the detection signal of the ranging sensor is smaller than the distance previously detected over the walking plane, so a raised obstacle exists.
In one embodiment, if it is determined that the intensity value of the detection signal of the collision sensor jumps, it is determined that an obstacle exists.
As shown in fig. 6, during the forward movement of the leg, the height of the foot may be lower than that of the obstacle, so the obstacle cannot be detected by the ranging sensor. In this case the detection signal of the collision sensor can be used: the intensity value of the detection signal is normally a preset value, and a sudden increase of the intensity value indicates a collision with an obstacle.
In an embodiment, the intensity value of the detection signal of the collision sensor is determined to jump in either of the following ways:

if the difference between the intensity value of the detection signal of the collision sensor at the first moment and the preset intensity value is larger than a first preset threshold, determining that the intensity value of the detection signal of the collision sensor jumps; or,

if the difference between the intensity values of the detection signals of the collision sensor at the first moment and the adjacent previous moment is larger than a second preset threshold, determining that the intensity value of the detection signal of the collision sensor jumps.
Specifically, the intensity value of the detection signal acquired when the collision sensor has not collided with an obstacle may be set as a reference intensity value (that is, the preset intensity value). The intensity value acquired during the movement of the foot is compared with this reference intensity value; if the intensity value changes, it can be determined that a collision with an obstacle has occurred and that an obstacle exists.

Alternatively, the intensity values of the detection signals acquired at adjacent moments may be compared; if the difference between the intensity values at adjacent moments is large, a collision with an obstacle is indicated and the presence of an obstacle can be determined. To distinguish it from the moments referred to below, the moment at which the collision occurs is referred to as the first moment.
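A minimal sketch of the two jump criteria above, under the assumption of scalar intensity samples; the function name and threshold parameters are illustrative:

```python
def collision_jump(intensity: float, prev_intensity: float,
                   preset_intensity: float,
                   first_threshold: float, second_threshold: float) -> bool:
    """True if the collision-sensor intensity jumps, i.e. the foot hit an
    obstacle: either the deviation from the preset (no-collision) intensity
    exceeds the first preset threshold, or the change between the first
    moment and the adjacent previous moment exceeds the second one."""
    return (abs(intensity - preset_intensity) > first_threshold
            or abs(intensity - prev_intensity) > second_threshold)
```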
In one embodiment, as shown in fig. 7, the collision sensor includes an elastic body 131, a collision ring 132, and a resistance strain gauge (not shown), which may be, for example, of the thin-film type. The resistance strain gauge may be attached to the surface of the elastic body that contacts the collision ring. The elastic body 131 is installed on the outer side of the suction cup. The collision ring can be made of an elastic material or of other materials.

When the collision ring 132 collides with an obstacle, the elastic body 131 deforms, the resistance value of the resistance strain gauge changes accordingly, and the collision with the obstacle can be detected from the change in resistance.
The first preset threshold and the second preset threshold may be the same or different.
In one embodiment, the distance detected by the TOF ranging sensor is determined to jump in either of the following ways:

if, at the second moment, the difference between the distance detected by the TOF ranging sensor and a preset distance is larger than a third preset threshold, determining that the distance detected by the TOF ranging sensor jumps; or,

if the difference between the distance at the second moment and the distance at the adjacent previous moment is larger than a fourth preset threshold, determining that the distance detected by the TOF ranging sensor jumps.
Specifically, the distance is obtained from the detection signal of the TOF ranging sensor and is the distance between the TOF ranging sensor and the top or bottom surface of the object that reflects the TOF beam. After the leg is lifted to a preset height, the distance between the TOF ranging sensor and the walking plane is used as a reference distance (that is, the preset distance). The distance obtained by the TOF ranging sensor during the movement of the foot is compared with this reference distance, and if the distance changes, an obstacle is detected.

Alternatively, the distances acquired at adjacent moments may be compared; if the difference between them is large, an obstacle exists. For example, as shown in fig. 5, during the movement of the foot, if at the previous adjacent moment the TOF ranging sensor was above the walking plane and at the second moment it is above the low raised obstacle 2, the difference between the two distances is large, that is, the distance at the second moment suddenly decreases, indicating that a raised obstacle exists.
The third preset threshold and the fourth preset threshold may be the same or different.
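Likewise for the ranging side, a non-authoritative sketch: the preset distance is the reading over the walking plane after the leg is lifted, and the sign of the deviation distinguishes raised from recessed obstacles (names and thresholds are assumptions):

```python
def tof_jump(distance: float, prev_distance: float, preset_distance: float,
             third_threshold: float, fourth_threshold: float) -> bool:
    """True if the TOF-measured distance jumps at the second moment."""
    return (abs(distance - preset_distance) > third_threshold
            or abs(distance - prev_distance) > fourth_threshold)

def classify_surface(distance: float, preset_distance: float,
                     threshold: float) -> str:
    """Shorter than the walking-plane reference -> raised obstacle;
    longer -> recessed obstacle; otherwise flat."""
    if preset_distance - distance > threshold:
        return "raised"
    if distance - preset_distance > threshold:
        return "recessed"
    return "flat"
```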
In one embodiment, the robot keeps the end face of the suction cup parallel to the walking plane at all times during walking, that is, the TOF beam is controlled to be perpendicular to the walking plane. When the walking plane is flat, because the legs step alternately, the distance data sampled by the TOF ranging sensor as each leg is lifted, translated forward, and landed form an approximately periodic square wave. These data can be used as a set of reference data; when the walking plane changes, or the motion servo module of a leg is abnormal (so that the TOF beam can no longer be kept perpendicular to the walking plane), the sampled data differ from the reference data.
By comparing the distance data acquired in real time during walking with the reference data, the robot can, on the one hand, monitor abnormal states of the robot itself, for example a loosened or damaged connector or abnormal execution of a leg's motion servo module, and, on the other hand, monitor anomalies of the environment, for example a gap 3, a drop 6, or a step or slope 7 in the walking plane, as shown in fig. 3.
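As a rough sketch of this monitoring step, assuming one step cycle of distance samples and a pre-recorded reference sequence of the same length (names and tolerance are illustrative):

```python
def cycle_is_abnormal(samples: list[float], reference: list[float],
                      tolerance: float) -> bool:
    """Flag a robot fault or an environment anomaly when the real-time
    distance samples of one step cycle deviate from the reference
    square-wave data by more than `tolerance` at any sample."""
    return any(abs(s - r) > tolerance for s, r in zip(samples, reference))
```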
In the above specific embodiment, the sensor module at the end of the leg includes the collision sensor and the single-point TOF ranging sensor, so that the surrounding environment can be flexibly sensed, and the obstacle can be accurately detected.
On the basis of the above embodiment, after an obstacle is detected, the three-dimensional information of the obstacle can further be acquired, to facilitate subsequent obstacle crossing or electronic map building. This may be implemented as follows:
controlling the foot to advance along the first track in the first direction;

in the process that the foot advances along the first track, if the distance detected by the TOF ranging sensor jumps at a third moment, determining the three-dimensional coordinates of each sampling point of the top surface or the bottom surface of the obstacle on the first track according to the three-dimensional coordinates of the TOF ranging sensor at each sampling point on the first track between the second moment and the third moment and the distance between the TOF ranging sensor and the top surface or the bottom surface of the obstacle between the second moment and the third moment, where the three-dimensional coordinates are coordinates in a three-dimensional coordinate system with the body of the multi-legged robot as the origin.
Specifically, as shown in fig. 5, after the obstacle 2 is detected, the foot continues to be controlled to advance in the first direction along the first track, that is, to move in the direction of the arrow. The relative position of the obstacle 2 and the robot is shown in fig. 8, where the obstacle 2 is assumed to be a rectangular parallelepiped. A coordinate system is established on the robot body, and the robot moves along the Y direction.

At this time, the TOF beam of the TOF ranging sensor at the end of the leg is perpendicular to the abdominal plane of the robot body, the X and Z coordinates of the TOF ranging sensor remain unchanged, and the top surface of the obstacle is scanned forward along the Y direction; the first track corresponds to the dashed line through the two points A and B in fig. 8. This process yields the TOF distance data and Y-coordinate position data shown in fig. 9:
Here "Y" represents the Y-axis coordinate of the TOF ranging sensor. As the TOF ranging sensor moves from back to front (along the Y axis), the distance between the top of the obstacle and the sensor is continuously scanned, acquired, and recorded. Time t2 and time t3 correspond to the boundaries in the width direction (along the direction of motion of the sensor) and appear as jumps in the distance data. At the same time, the Y-axis coordinates corresponding to time t2 and time t3 can be obtained, and their difference reflects the width of the obstacle. The distance data between time t2 and time t3 reflect the profile of the top of the obstacle, and the height of the obstacle relative to the robot's coordinate origin can be obtained by taking the difference of the distances. In one embodiment, the top profile of the obstacle may not be planar, in which case the distance data between time t2 and time t3 may fluctuate.

From the three-dimensional coordinates measured at the TOF ranging sensor and the distance data between time t2 and time t3, the three-dimensional coordinates of each sampling point of the top surface or the bottom surface of the obstacle along the first track can be determined.

The X and Y coordinates of each sampling point of the top surface or the bottom surface of the obstacle on the first track are the same as the X and Y coordinates of the TOF ranging sensor at the corresponding sampling point of the first track; the Z coordinate of each sampling point can be obtained from the Z coordinate of the TOF ranging sensor at that sampling point and the distance between the TOF ranging sensor and the top surface or the bottom surface of the obstacle.

The above time t2 can be regarded as the second moment, and time t3 as the third moment.

For a recessed obstacle, the depth and the three-dimensional coordinates of each sampling point along the first track can be obtained in a similar manner.
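The coordinate mapping just described can be sketched as follows; `sensor_points` are assumed to be the TOF ranging sensor's body-frame coordinates at the sampling points between the second and third moments, with the beam pointing straight down:

```python
def top_surface_points(sensor_points: list[tuple[float, float, float]],
                       distances: list[float]) -> list[tuple[float, float, float]]:
    """Body-frame coordinates of the obstacle top (or bottom) surface along
    the first track: X and Y are those of the sensor at each sampling
    point; Z is the sensor Z minus the measured distance (downward beam).
    For a recessed obstacle the same formula gives points below the
    walking plane."""
    return [(x, y, z - d) for (x, y, z), d in zip(sensor_points, distances)]
```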
In another embodiment, if the height of the obstacle is higher than the height of the foot during the movement along the first track, then before the step of "controlling the foot to advance along the first track in the first direction", as shown in fig. 10, the method further includes the following steps:
controlling the foot to advance towards a second direction perpendicular to the first direction along a second track until the distance detected by the TOF ranging sensor jumps at a fourth moment;
and determining the three-dimensional coordinates of each sampling point of the side surface of the obstacle on the second track according to the three-dimensional coordinates of the TOF ranging sensor at each sampling point of the second track and the three-dimensional coordinates of the TOF ranging sensor at the first moment.
In one embodiment, when the collision sensor detects a collision with an obstacle while the robot is walking, as shown in figs. 10 and 11, the foot may be rotated, that is, the posture of the ranging sensor adjusted, so that the TOF beam of the TOF ranging sensor at the end of the leg 111 is parallel to the abdominal plane of the robot body. Keeping the X and Y coordinates of the TOF ranging sensor unchanged, the TOF ranging sensor then scans the side of the obstacle upward along the Z direction, as shown by the dashed trajectory in fig. 11. This process yields the TOF distance data and Z-coordinate data shown in fig. 12.
In fig. 12, "distance" indicates the distance data detected by the TOF ranging sensor, and "Z" indicates the Z coordinate of the TOF ranging sensor. The TOF ranging sensor moves from bottom to top and continuously scans distance data. At time t1 (i.e., the fourth moment), the TOF ranging sensor passes the height boundary, the distance data jumps, and the corresponding Z coordinate is recorded. In one embodiment, if there are several boundaries, the maximum Z coordinate is taken as the boundary value in the height direction of the obstacle, i.e., the height of the obstacle.
The TOF ranging sensor of the foot is then lifted to a height greater than the height of the obstacle, and the posture of the TOF ranging sensor is adjusted so that the TOF beam is again perpendicular to the abdominal plane of the robot body, that is, the original posture of the TOF ranging sensor is restored.
Further, from the three-dimensional coordinates of the TOF ranging sensor at each sampling point of the second track and the three-dimensional coordinates of the TOF ranging sensor at time t1, the three-dimensional coordinates of each sampling point of the side surface of the obstacle on the second track can be obtained.

The X and Z coordinates of each sampling point of the side surface of the obstacle on the second track are the same as the X and Z coordinates of the TOF ranging sensor at the corresponding sampling point of the second track; the Y coordinate of each sampling point can be obtained from the Y coordinate of the TOF ranging sensor at that sampling point and the distance between the TOF ranging sensor and the side surface of the obstacle.
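A matching sketch for the side scan, with the beam now assumed to point horizontally forward along +Y:

```python
def side_surface_points(sensor_points: list[tuple[float, float, float]],
                        distances: list[float]) -> list[tuple[float, float, float]]:
    """Body-frame coordinates of the obstacle side surface along the second
    track: X and Z are those of the sensor at each sampling point; Y is
    the sensor Y plus the measured horizontal distance to the side."""
    return [(x, y + d, z) for (x, y, z), d in zip(sensor_points, distances)]
```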
In other embodiments, after the collision sensor detects a collision with the obstacle, the foot is raised directly, without rotating it, until it is higher than the obstacle, and the foot is then controlled to advance along the first track in the first direction to detect the three-dimensional information of the obstacle.
Further, through the above process of foot movement, the height (or depth) and/or the width of the obstacle can also be obtained. In an embodiment, the height is obtained in one of the following ways:
mode 1: and determining the height or the depth of the obstacle according to the difference between the distances of the second time or the third time and the adjacent previous time.
Mode 2: and determining the height of the obstacle according to the three-dimensional coordinate corresponding to the TOF ranging sensor at the fourth moment.
When the robot encounters an obstacle while walking, the following cases generally arise:

1) The obstacle is low

A leg in the advancing direction of the robot (such as the leg 111) sweeps over the obstacle during its step, as shown in fig. 5. When the obstacle is detected, that is, when the distance jumps, the height of the obstacle can be calculated directly by measuring the distance from the TOF ranging sensor to the walking plane and the distance from the TOF ranging sensor to the top surface of the obstacle, and taking the difference.
2) The obstacle is tall

A leg in the forward direction of the robot touches the side of the obstacle during its step, as shown in fig. 10:

When the leg 111 hits a front obstacle during its step, the collision sensor around its suction cup is triggered. After the obstacle is found, the robot body may stop advancing; with the robot body stationary, the end of the leg 111 is rotated 90 degrees around the last joint so that the TOF beam points horizontally forward, i.e., parallel to the walking plane. The end of the leg 111 is then controlled to move in the direction perpendicular to the abdominal plane of the robot body (as shown by the dashed arrow in fig. 10), so that the side information of the obstacle, including its height, can be scanned; the height boundary is detected when the distance jumps (at the fourth moment).
For a recessed obstacle, the depth may be determined in a similar manner.
In one embodiment, the width is obtained as follows:
acquiring the moving distance of the foot along the first track between the second moment and the third moment;
determining the width of the obstacle along the first track according to the distance moved along the first track.
In one embodiment, as shown in figs. 5 and 9, after the obstacle is found at time t2, when the distance jumps, the current leg 111 continues to detect forward until time t3, when the distance jumps again; the distance moved by the foot between time t2 and time t3 gives the width of the obstacle.
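Putting the two measurements together, a minimal sketch (illustrative names; the Y values are the foot's coordinates recorded at the two jumps):

```python
def obstacle_height(dist_to_plane: float, dist_to_top: float) -> float:
    """Height of a low obstacle: distance to the walking plane minus the
    distance to the obstacle top, measured at the same foot height."""
    return dist_to_plane - dist_to_top

def obstacle_width(y_second_moment: float, y_third_moment: float) -> float:
    """Width along the first track: foot travel between the distance jumps
    at the second and third moments."""
    return abs(y_third_moment - y_second_moment)
```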
In one embodiment, the multi-legged robot can be controlled to cross the obstacle according to the height and the width of the obstacle, and obstacle avoidance and obstacle crossing functions are achieved.
On the basis of the above embodiment, further, after the three-dimensional coordinates of each sampling point of the obstacle are acquired, the following operations may be performed:
performing data fitting according to the three-dimensional coordinates of each sampling point of the top surface or the bottom surface of the obstacle on at least two first tracks;
and acquiring the three-dimensional information of the obstacle according to the fitted data.
Specifically, the three-dimensional coordinates of the sampling points on a plurality of first tracks are collected, and data fitting is performed to obtain the three-dimensional information of the obstacle. In one embodiment, one leg, for example the leg 111, may be controlled to move along a plurality of first tracks to acquire the three-dimensional coordinates of the sampling points on those tracks; alternatively, a plurality of legs, for example the leg 111 and the leg 112, may be controlled to move along a plurality of first tracks. As shown in figs. 8 and 13, the leg 111 moves along the first track of line AB and the leg 112 along the first track of line CD. For example, after the leg 111 lands, the other leg 112 in the advancing direction is lifted, and the leg 112 acquires the obstacle information of the other side in the same manner. The three-dimensional information of the whole front obstacle can then be estimated from the obstacle data on the left and right sides of the robot.
From the three-dimensional coordinates of the sampling points obtained by scanning in the width direction, the (x, y, z) coordinate information of all sampling points on the segments AB and CD can be obtained. In an embodiment, the four boundary points of AB and CD may be projected directly onto the XY plane; the projected points are denoted A', B', C', and D', and AA', BB', CC', and DD' are used as the boundaries on the two sides.
In one embodiment, if the obstacle is a predetermined regular solid geometry, the three-dimensional (3D) information of the obstacle can be obtained by directly connecting A and A', B and B', C and C', D and D', and the segments AB, CD, AD, and BC, with straight lines.
In one embodiment, if the obstacle is a predetermined regular solid geometry, the simplest three-dimensional mapping method is to take AA', BB', AB as one group and CC', DD', CD as the other, and to expand the number of sampling points on each line segment so that the numbers of sampling points in each pair of corresponding segments (AA' and DD', BB' and CC', AB and CD, three pairs in total) are equal. Finally, the corresponding data points of each pair of segments are connected in space with straight lines to obtain the three-dimensional 3D information of the obstacle.
For the case where the shape of the obstacle is completely unknown, as shown in fig. 13, since the distance between AB and CD along the X axis of the robot is known, equally spaced division points can be taken on the X axis. The three-dimensional coordinate information at these division points is acquired with the same method used to detect the three-dimensional coordinates of the sampling points on the segments AB and CD. The data collected at each division point (together with the AB and CD segments) form one group; the number of sampling points is expanded so that the numbers of corresponding data points across the groups are equal, and the points within each group are numbered in the same order. Finally, as shown in fig. 14, the data with the same number in each group are fitted to obtain further connecting lines across the top of the obstacle, enriching the detail of the obstacle.
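A sketch of this fitting step, under the assumption that the AB and CD scans are given as point lists: sampling points are expanded by index-space interpolation, and corresponding points are blended linearly to produce intermediate top lines (NumPy is used for brevity; all names are illustrative):

```python
import numpy as np

def fit_top_lines(ab_points, cd_points, n_lines):
    """Interpolate additional top-surface lines between the AB and CD scans.

    Both scans are resampled to the same number of points, then each pair
    of corresponding points is connected by linear interpolation, yielding
    `n_lines` lines (including AB and CD themselves) across the top."""
    ab = np.asarray(ab_points, dtype=float)  # shape (n1, 3)
    cd = np.asarray(cd_points, dtype=float)  # shape (n2, 3)
    n = max(len(ab), len(cd))

    def resample(line):
        # Expand the number of sampling points by interpolating each of
        # the x, y, z columns over a common index grid.
        idx = np.linspace(0.0, len(line) - 1.0, n)
        cols = [np.interp(idx, np.arange(len(line)), line[:, k]) for k in range(3)]
        return np.stack(cols, axis=1)

    ab_r, cd_r = resample(ab), resample(cd)
    # Blend corresponding (same-numbered) points for each intermediate line.
    return [ab_r + t * (cd_r - ab_r) for t in np.linspace(0.0, 1.0, n_lines)]
```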
In one embodiment, the data fitting may be further performed according to three-dimensional coordinates of each sampling point on at least two first tracks on the top surface or the bottom surface of the obstacle and three-dimensional coordinates of each sampling point on at least one second track on the side surface of the obstacle.
As shown in fig. 15, the profiles of the two side surfaces AB and CD can be obtained by fitting the three-dimensional coordinates of the respective sampling points on the at least one second trajectory of the side surface of the obstacle.
Further, after obtaining the three-dimensional information of the obstacle, the method further includes:
according to the three-dimensional information of the obstacle detected by the multi-legged robot, a first electronic map of the environment where the multi-legged robot is located is established; or the like, or, alternatively,
and determining the current position of the multi-legged robot in the environment according to the three-dimensional information of the obstacle and a second electronic map of the environment where the multi-legged robot is located, wherein the second electronic map is acquired in advance.
Specifically, the three-dimensional information of the obstacle is obtained by fitting the spatial data points, from which a more accurate 3D map can be established. The three-dimensional structures of obstacles are shown in figs. 15 and 16, where fig. 15 shows a raised obstacle and fig. 16 a recessed obstacle.
In an embodiment, the current position of the robot can be accurately determined from the acquired three-dimensional structure of the obstacle and a preset 3D map.
In the above specific embodiment, the sensor module at the end of the leg includes a collision sensor and a single-point TOF ranging sensor, so the surrounding environment can be sensed flexibly and obstacles can be detected accurately. A 3D map can be constructed effectively, providing accurate and rich data for the robot's path planning and obstacle-crossing decisions.
In one embodiment, the scheme can be applied to cleaning the glass windows of buildings, with the robot operating autonomously across multiple panes of glass. For a common window, the scheme can achieve full glass-cleaning coverage; as shown in fig. 17, a1 is the starting position of the robot's cleaning, a2 is the window body, and a3 is the cleaning-task route.
The legged structure of the robot can span the strip-shaped frame between window panes and thus complete the cleaning of the whole window. For the cleaning-route planning of each pane and the task planning between panes, reference may be made to schemes in the related art.
In an embodiment, in cooperation with an App on a mobile terminal device such as a mobile phone, the 3D map established during cleaning by the sensor module on the robot can be uploaded and displayed on the App side in real time; the effect is shown in fig. 18.
In one embodiment provided herein, each leg of the multi-legged robot has 5 joints. The first 4 are actively controlled joints; each has one rotational degree of freedom and is connected to its own motion servo module, which includes, for example, a motor and a speed reducer. The joint at the end of the leg is a single passive spherical joint with three spatial rotational degrees of freedom in mutually orthogonal directions. The leg structure is shown in fig. 19:
the motion servo module 1111 of the first joint is fixedly connected to the robot body, and drives a rotating shaft 1112, wherein the rotating shaft 1112 is perpendicular to the abdominal plane of the robot. The motion servo module 1121 of the second joint drives a rotation shaft 1122, and the rotation shaft 1122 is perpendicular to the rotation shaft 1112. The motion servo module 1131 of the third joint drives a shaft 1132, and the shaft 1132 is parallel to the shaft 1122. The third motion servo module 1141 drives the shaft 1142, and the shaft 1142 is parallel to the shaft 1132. The fifth joint 115 is a passive joint, and a partially enlarged view of the structure of the passive joint is shown in fig. 20, wherein a leg connector 1151 is fixedly connected to the motion servo module 1141, and a suction cup connector 1152 is connected to the suction cup 12. The leg link 1151 is coupled to a suction cup link 1152 (only a portion of the suction cup link 1152 is shown in fig. 20) via a ball-end link 1153, allowing spatial three-dimensional rotation between the two relative to a spherical surface. In one embodiment, a joint return mechanism is provided between the leg attachment member 1151 and the suction cup attachment member 1152, such as a spring 1154 that is disposed around the exterior of the ball-end link 1153 and is coupled to the leg attachment member 1151 and the suction cup attachment member 1152, respectively, to return the passive joint to its original position when no additional force is applied to the suction cup 12.
In one embodiment, a cover 1155 is also provided on the suction cup connector 1152.
In the above specific embodiment, the leg of the robot is an under-actuated structure, which enables the robot to walk on a plane and cross obstacles while keeping the structure as simple as possible; the structure is highly robust to control errors of the motion servo modules and to environment-perception errors, and has strong environmental adaptability.
In one embodiment, the multi-legged robot further has vacuum adsorption assemblies. A vacuum adsorption assembly includes: a vacuum generator, a barometer, an air release valve, a vacuum buffer tank, gas pipelines, and a suction cup. Each leg is provided with such a vacuum adsorption assembly; taking the four-legged robot as an example, it is provided with 4 sets of independently controllable vacuum adsorption assemblies. The vacuum generator, barometer, air release valve, and vacuum buffer tank are all arranged in the robot body. In one embodiment, the vacuum generator is arranged on the robot body and connected with the suction cup through a gas pipeline, for controllably pumping the gas in the suction cup out to a vacuum; the buffer tank is arranged between the suction cup and the vacuum generator; the air release valve is arranged on the gas pipeline between the suction cup and the buffer tank.
In order to adapt to the smooth surface of glass, ceramic tile and the like, the structure of the sucking disc is shown in figure 21:
the suction cup connection member 1152 is connected to the ball head connection member 1153 and serves as a support frame for the suction cup as a whole, and the end face of the support frame can generate pressure/support force when the suction cup is sucked. The suction nozzle 1156 is connected to an air tube, and the suction lip 1157 is made of an elastic material such as silicone or the like, and has an opening plane extended beyond a supporting end surface of the suction cup connection member 1152. The angle error redundancy is provided, and when a certain included angle is formed between the end face of the sucking disc and the adsorbed plane, the sucking disc can complete the adsorption operation. An anti-slip ring 1158 is mounted to the supporting end surface of the suction cup connection member 1152 to increase friction force upon suction.
The method of the embodiments of the present application may also be implemented by controlling the robot through an electronic device. For example, the electronic device includes:
a processor, and a memory for storing executable instructions for the processor.
Optionally, the electronic device may further include: a communication interface for communicating with other devices.
The above components may communicate over one or more buses.
The processor is configured to execute the corresponding method in the foregoing method embodiments by executing the executable instructions; for the specific implementation, reference may be made to the foregoing method embodiments, which is not repeated here.
The embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the method in the foregoing method embodiment is implemented.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (15)

1. An obstacle detection method, applied to a multi-legged robot having feet provided with sensor modules, the method comprising:
acquiring a detection signal of the sensor module of a foot of the multi-legged robot while the foot advances in a first direction along a first trajectory;
determining whether an obstacle exists according to the detection signal of the sensor module.
2. The method of claim 1, wherein the sensor module comprises a TOF ranging sensor arranged at the bottom of the foot and a collision sensor arranged on the foot, and wherein the determining whether an obstacle exists according to the detection signal of the sensor module comprises:
if it is determined that the intensity value of the detection signal of the collision sensor jumps, determining that an obstacle exists; or, alternatively,
determining the distance detected by the TOF ranging sensor according to the detection signal of the TOF ranging sensor, and if it is determined that the distance jumps, determining that an obstacle exists.
3. The method of claim 2, wherein the determining that the intensity value of the detection signal of the collision sensor jumps comprises:
if the difference between the intensity value of the detection signal of the collision sensor at a first moment and a preset intensity value is greater than a first preset threshold, determining that the intensity value of the detection signal of the collision sensor jumps; or, alternatively,
if the difference between the intensity values of the detection signal of the collision sensor at the first moment and at the adjacent previous moment is greater than a second preset threshold, determining that the intensity value of the detection signal of the collision sensor jumps.
4. The method of claim 2, wherein the determining that the distance detected by the TOF ranging sensor jumps comprises:
if, at a second moment, the difference between the distance detected by the TOF ranging sensor and a preset distance is greater than a third preset threshold, determining that the distance detected by the TOF ranging sensor jumps; or, alternatively,
if the difference between the distance at the second moment and the distance at the adjacent previous moment is greater than a fourth preset threshold, determining that the distance detected by the TOF ranging sensor jumps.
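For illustration only, a minimal sketch of the jump tests of claims 2-4 follows; the threshold values, units, and function names are assumptions of this note, not values given in the application:

    # Illustrative jump tests per claims 3-4; thresholds are assumed example values.

    def intensity_jumped(intensity_now: float, intensity_prev: float,
                         preset_intensity: float = 0.0,
                         first_threshold: float = 5.0,
                         second_threshold: float = 3.0) -> bool:
        """Collision sensor: jump vs a preset intensity, or vs the previous moment."""
        return (abs(intensity_now - preset_intensity) > first_threshold or
                abs(intensity_now - intensity_prev) > second_threshold)

    def distance_jumped(dist_now: float, dist_prev: float,
                        preset_distance: float = 0.05,
                        third_threshold: float = 0.02,
                        fourth_threshold: float = 0.02) -> bool:
        """TOF sensor: jump vs the preset distance, or vs the previous moment."""
        return (abs(dist_now - preset_distance) > third_threshold or
                abs(dist_now - dist_prev) > fourth_threshold)

    def obstacle_present(intensity_now: float, intensity_prev: float,
                         dist_now: float, dist_prev: float) -> bool:
        """Claim 2: an obstacle exists when either sensor's reading jumps."""
        return (intensity_jumped(intensity_now, intensity_prev) or
                distance_jumped(dist_now, dist_prev))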
5. The method according to any one of claims 2-4, further comprising, after determining that the obstacle exists:
controlling the foot to advance in the first direction along the first trajectory;
during the advance of the foot along the first trajectory, if the distance detected by the TOF ranging sensor jumps at a third moment, determining the three-dimensional coordinates of each sampling point of the top surface or the bottom surface of the obstacle on the first trajectory according to the three-dimensional coordinates of the TOF ranging sensor at each sampling point on the first trajectory between the second moment and the third moment, and the distances between the TOF ranging sensor and the top surface or the bottom surface of the obstacle measured at those sampling points, wherein the three-dimensional coordinates are coordinates in a three-dimensional coordinate system with the body of the multi-legged robot as the origin.
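As an illustrative reading of claim 5 (not part of the claim itself): if the TOF beam is taken to point straight down along the body-frame z-axis, an assumption since the claim does not fix the beam axis, each surface point follows by subtracting the measured range from the sensor position:

    # Sketch of claim 5's reconstruction; assumes the TOF beam points straight
    # down (-z) in a body-fixed frame whose origin is the robot body.
    import numpy as np

    def surface_points(sensor_xyz: np.ndarray, ranges: np.ndarray) -> np.ndarray:
        """sensor_xyz: (N, 3) sensor positions at the sampling points between the
        second and third moments; ranges: (N,) distances measured at those points.
        Returns (N, 3) points on the obstacle's top (or bottom) surface."""
        pts = sensor_xyz.copy()
        pts[:, 2] -= ranges  # each surface point lies one measured range below
        return pts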
6. The method of claim 5, wherein before the controlling the foot to advance in the first direction along the first trajectory, the method further comprises:
controlling the foot to advance along a second trajectory in a second direction perpendicular to the first direction, until the distance detected by the TOF ranging sensor jumps at a fourth moment;
determining the three-dimensional coordinates of each sampling point of the side surface of the obstacle on the second trajectory according to the three-dimensional coordinates of the TOF ranging sensor at each sampling point on the second trajectory and the three-dimensional coordinates of the TOF ranging sensor at the first moment.
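One possible geometry for claim 6, sketched under the assumption that the side face is planar and vertical at the position reached when the collision sensor fired at the first moment; the axis convention (x along the first direction) is mine, not the application's:

    # Sketch of claim 6; assumes a planar, vertical side face located at the x
    # position reached at the first moment (collision contact).
    import numpy as np

    def side_surface_points(sensor_xyz_track2: np.ndarray,
                            sensor_xyz_t1: np.ndarray) -> np.ndarray:
        """Pair each (y, z) sampled on the second trajectory with the contact x."""
        pts = sensor_xyz_track2.copy()
        pts[:, 0] = sensor_xyz_t1[0]  # assumed planar side face at the contact x
        return pts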
7. The method of claim 5, wherein after the determining the three-dimensional coordinates of each sampling point of the top surface or the bottom surface of the obstacle on the first trajectory, the method further comprises:
performing data fitting according to the three-dimensional coordinates of each sampling point of the top surface or the bottom surface of the obstacle on at least two first trajectories;
acquiring three-dimensional information of the obstacle according to the fitted data.
8. The method of claim 7, wherein the performing data fitting according to the three-dimensional coordinates of each sampling point of the top surface or the bottom surface of the obstacle on at least two first trajectories comprises:
performing data fitting according to the three-dimensional coordinates of each sampling point of the top surface or the bottom surface of the obstacle on the at least two first trajectories and the three-dimensional coordinates of each sampling point of the side surface of the obstacle on at least one second trajectory.
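One possible realization of the fitting in claims 7-8, sketched under the assumption of a least-squares plane model; the application does not name a particular fitting method, so both the model and the helper names are illustrative:

    # Least-squares plane through the surface samples pooled from two or more
    # first trajectories (optionally plus side-surface samples).
    import numpy as np

    def fit_plane(points: np.ndarray) -> np.ndarray:
        """points: (N, 3). Returns (a, b, c) of the plane z = a*x + b*y + c."""
        A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
        coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
        return coeffs

    def obstacle_extents(points: np.ndarray):
        """Axis-aligned extents of the fitted cloud, one possible form of the
        obstacle's 'three-dimensional information'."""
        return points.min(axis=0), points.max(axis=0)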
9. The method according to claim 7 or 8, wherein after the acquiring the three-dimensional information of the obstacle, the method further comprises:
establishing a first electronic map of the environment where the multi-legged robot is located according to the three-dimensional information of the obstacle detected by the multi-legged robot; or, alternatively,
determining the current position of the multi-legged robot in the environment according to the three-dimensional information of the obstacle and a second electronic map of the environment, wherein the second electronic map is acquired in advance.
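A sketch of the first branch of claim 9, under the assumption that the first electronic map is a 2D occupancy grid; the resolution and map-frame handling are hypothetical:

    # Rasterize obstacle points into a 2D occupancy grid standing in for the
    # 'first electronic map'.
    import numpy as np

    def add_obstacle_to_map(grid: np.ndarray, pts: np.ndarray,
                            resolution: float = 0.05) -> None:
        """grid: 2D int array (1 = occupied); pts: (N, 3) points in the map frame."""
        ij = np.floor(pts[:, :2] / resolution).astype(int)
        keep = ((ij[:, 0] >= 0) & (ij[:, 0] < grid.shape[0]) &
                (ij[:, 1] >= 0) & (ij[:, 1] < grid.shape[1]))
        ij = ij[keep]
        grid[ij[:, 0], ij[:, 1]] = 1  # mark every cell under an obstacle sample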
10. The method of claim 5, further comprising:
determining the height or the depth of the obstacle according to the difference between the distances detected at the second moment or the third moment and at the adjacent previous moment.
11. The method of claim 6, further comprising:
determining the height of the obstacle according to the three-dimensional coordinates of the TOF ranging sensor at the fourth moment.
12. The method of claim 10 or 11, further comprising:
acquiring the distance moved by the foot along the first trajectory between the second moment and the third moment;
determining the width of the obstacle along the first trajectory according to the distance moved along the first trajectory.
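The dimension estimates of claims 10 and 12 reduce to simple arithmetic; the sign convention below (positive for a raised obstacle, negative for a depression) is an assumption of this note:

    # Dimension estimates per claims 10 and 12; the sign convention is assumed.
    import numpy as np

    def obstacle_height_or_depth(dist_prev: float, dist_now: float) -> float:
        """Claim 10: a drop in measured range at the jump means a raised obstacle
        (positive height); an increase means a depression (negative value)."""
        return dist_prev - dist_now

    def obstacle_width(pos_t2: np.ndarray, pos_t3: np.ndarray) -> float:
        """Claim 12: width along the first trajectory = distance the foot moved
        between the second moment (jump onto) and the third moment (jump off)."""
        return float(np.linalg.norm(pos_t3 - pos_t2))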
13. The method of claim 12, wherein after the determining the width of the obstacle along the first trajectory, the method further comprises:
controlling the multi-legged robot to cross the obstacle according to the height and the width of the obstacle.
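A conservative crossing policy per claim 13, sketched with hypothetical mechanical limits; the application only states that crossing is controlled according to the obstacle's height and width, so the limits, margin, and action names below are assumptions:

    # Claim 13 sketch: a conservative crossing decision.
    MAX_STEP_HEIGHT = 0.12  # assumed mechanical limit of a leg lift (m)
    MAX_STEP_SPAN = 0.25    # assumed maximum single-step span (m)
    MARGIN = 0.02           # assumed safety clearance (m)

    def plan_crossing(height: float, width: float) -> str:
        if height + MARGIN > MAX_STEP_HEIGHT:
            return "detour"     # too tall to step over: plan a path around
        if width + MARGIN > MAX_STEP_SPAN:
            return "climb"      # too wide for one step: step onto the top first
        return "step_over"      # lift the foot by height + MARGIN, span the width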
14. A multi-legged robot, comprising:
a robot body and a plurality of legs;
wherein the robot body is provided with a processor, and the foot at the distal end of each leg is provided with a sensor module;
the processor is configured to implement the method of any one of claims 1-13.
15. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method of any one of claims 1-13.
CN202010252466.6A 2020-04-01 2020-04-01 Obstacle detection method, robot, and storage medium Pending CN113552589A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010252466.6A CN113552589A (en) 2020-04-01 2020-04-01 Obstacle detection method, robot, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010252466.6A CN113552589A (en) 2020-04-01 2020-04-01 Obstacle detection method, robot, and storage medium

Publications (1)

Publication Number Publication Date
CN113552589A true CN113552589A (en) 2021-10-26

Family

ID=78100857

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010252466.6A Pending CN113552589A (en) 2020-04-01 2020-04-01 Obstacle detection method, robot, and storage medium

Country Status (1)

Country Link
CN (1) CN113552589A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114633826A (en) * 2022-05-19 2022-06-17 深圳鹏行智能研究有限公司 Leg collision processing method for foot type robot and foot type robot
CN114967687A (en) * 2022-05-23 2022-08-30 纯米科技(上海)股份有限公司 Obstacle detection method, system, electronic device and computer-readable storage medium

Citations (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005233615A (en) * 2004-02-17 2005-09-02 Kyosan Electric Mfg Co Ltd Apparatus and method for detecting obstacle
KR100809353B1 (en) * 2006-12-19 2008-03-05 삼성전자주식회사 Method and apparatus for measuring the distance by using radio frequency signal
CN201572040U (en) * 2009-10-09 2010-09-08 泰怡凯电器(苏州)有限公司 Self-moving land disposal robot
CN101855573A (en) * 2008-01-16 2010-10-06 三菱电机株式会社 Dynamic obstacle judgment device
JP2011096170A (en) * 2009-11-02 2011-05-12 Toyota Motor Corp Autonomous mobile device and control method therefor
US20120232696A1 (en) * 2009-10-09 2012-09-13 Ecovacs Robotics (Suzhou) Co., Ltd. Autonomous Moving Floor-Treating Robot and Control Method Thereof for Edge-Following Floor-Treating
CN103534659A (en) * 2010-12-30 2014-01-22 美国iRobot公司 Coverage robot navigation
CN105445812A (en) * 2015-11-12 2016-03-30 青岛海信电器股份有限公司 Human body sensor detection method and human body sensor detection device
CN105676845A (en) * 2016-01-19 2016-06-15 中国人民解放军国防科学技术大学 Security service robot and intelligent obstacle avoidance method of robot in complex environment
CN105835044A (en) * 2016-06-07 2016-08-10 电子科技大学 Exoskeleton robot ranging smart shoe system based on integration of several sensors
CN107544494A (en) * 2017-08-17 2018-01-05 上海美祎科技有限公司 Sweeping robot and its barrier-avoiding method
CN206991119U (en) * 2017-07-20 2018-02-09 思依暄机器人科技(深圳)有限公司 A kind of detection means and telecontrol equipment
CN107943021A (en) * 2017-10-19 2018-04-20 布法罗机器人科技(成都)有限公司 A kind of adaptive stair activity control system and method
CN107928566A (en) * 2017-12-01 2018-04-20 深圳市沃特沃德股份有限公司 Vision sweeping robot and obstacle detection method
CN108209773A (en) * 2018-01-04 2018-06-29 深圳市银星智能科技股份有限公司 The intelligent barrier avoiding method of clean robot and clean robot
CN108256430A (en) * 2017-12-20 2018-07-06 北京理工大学 Obstacle information acquisition methods, device and robot
CN108803588A (en) * 2017-04-28 2018-11-13 深圳乐动机器人有限公司 The control system of robot
CN108873900A (en) * 2018-06-27 2018-11-23 北京航空航天大学 Method, system and the robot to clear the jumps when a kind of robot ambulation
CN109375618A (en) * 2018-09-27 2019-02-22 深圳乐动机器人有限公司 The navigation barrier-avoiding method and terminal device of clean robot
CN109857112A (en) * 2019-02-21 2019-06-07 广东智吉科技有限公司 Obstacle Avoidance and device
CN109872324A (en) * 2019-03-20 2019-06-11 苏州博众机器人有限公司 Ground obstacle detection method, device, equipment and storage medium
CN110503040A (en) * 2019-08-23 2019-11-26 斯坦德机器人(深圳)有限公司 Obstacle detection method and device
WO2020008536A1 (en) * 2018-07-03 2020-01-09 三菱電機株式会社 Obstacle detection device
CN110850885A (en) * 2019-12-20 2020-02-28 深圳市杉川机器人有限公司 Autonomous robot



Similar Documents

Publication Publication Date Title
EP2888603B1 (en) Robot positioning system
EP3224003B1 (en) Systems and methods of use of optical odometry sensors in a mobile robot
KR20230050396A (en) Obstacle detection method, device, autonomous walking robot and storage medium
EP3711647A1 (en) Self-propelled vacuum cleaner
CN106537185B (en) Device for detecting obstacles by means of intersecting planes and detection method using said device
US11607094B2 (en) Navigation of autonomous mobile robots
JP2017503267A (en) Autonomous mobile robot
AU2015270607B2 (en) Device for detection of obstacles in a horizontal plane and detection method implementing such a device
CN113552589A (en) Obstacle detection method, robot, and storage medium
CN113475977B (en) Robot path planning method and device and robot
CN113841098A (en) Detecting objects using line arrays
CN113494916A (en) Map construction method and multi-legged robot
CN112423639B (en) Autonomous walking type dust collector
US11960296B2 (en) Method and apparatus for autonomous mobile device
Heppner et al. Enhancing sensor capabilities of walking robots through cooperative exploration with aerial robots
CN112493926B (en) A robot of sweeping floor for scanning furniture bottom profile
EP4191360A1 (en) Distance measurement device and robotic vacuum cleaner
TW201825036A (en) Method for operating an automatically moving cleaning device and cleaning device of this type
KR100904769B1 (en) Detecting device of obstacle and method thereof
WO2022156260A1 (en) Autonomous mobile device
CN114019951B (en) Robot control method and device, robot and readable storage medium
US20230225580A1 (en) Robot cleaner and robot cleaner control method
JP7434943B2 (en) Self-position control system and self-position control method
EP4349234A1 (en) Self-moving device
CN114994696A (en) Integrated sensor and cleaning device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination