CN115145273A - Obstacle avoidance control method, robot and computer-readable storage medium - Google Patents


Info

Publication number
CN115145273A
CN115145273A (Application CN202210713398.8A)
Authority
CN
China
Prior art keywords
robot
area
control method
sensor
blind area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210713398.8A
Other languages
Chinese (zh)
Inventor
董济铭
何林
唐旋来
Current Assignee
Shanghai Keenlon Intelligent Technology Co Ltd
Original Assignee
Shanghai Keenlon Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Keenlon Intelligent Technology Co Ltd filed Critical Shanghai Keenlon Intelligent Technology Co Ltd
Priority to CN202210713398.8A priority Critical patent/CN115145273A/en
Publication of CN115145273A publication Critical patent/CN115145273A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • G05D1/0242 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using non-visible light signals, e.g. IR or UV signals
    • G05D1/0255 Control of position or course in two dimensions specially adapted to land vehicles using acoustic signals, e.g. ultra-sonic signals
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Optics & Photonics (AREA)
  • Acoustics & Sound (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention provides an obstacle avoidance control method for a robot, a robot, and a computer-readable storage medium. A reflection area is arranged outside a sensing blind area of the robot and can receive light beams from objects in the blind area, and reference information for the reflection area is stored in the robot in advance. The control method comprises the following steps: when the robot moves close to the blind area, acquiring scanning information of the reflection area with a sensor of the robot; obtaining a detection parameter of a preset type from the scanning information of the reflection area; comparing the detection parameter of the preset type with the reference information of the reflection area; and controlling the motion of the robot according to the comparison result. In the embodiments of the present application, the sensor acquires scanning information of the reflection area, and through the reflection area obtains obstacle information from within the sensor's blind area; this information is compared with the reference information of the reflection area to judge whether an obstacle is present in the blind area, and the robot is controlled to perform a corresponding avoidance action, which can reduce the risk of collision and improve the reliability of robot operation.

Description

Obstacle avoidance control method, robot and computer-readable storage medium
Technical Field
The present invention generally relates to the field of intelligent device control technologies, and in particular, to an obstacle avoidance control method, a robot, and a computer-readable storage medium.
Background
Robots generally integrate various sensors to acquire information about their surroundings in real time. For example, a lidar on the robot scans the surrounding environment to find obstacles in advance, especially moving obstacles, so that the robot can be controlled to avoid them. However, in practical application scenarios the robot has sensing blind areas caused by factors such as the limited field of view of its sensors; obstacles inside a blind area cannot be found in time, so a risk of collision remains.
The statements in this background section merely reflect the prior art as known to the inventors and do not necessarily represent the prior art in the field.
Disclosure of Invention
In view of one or more defects in the prior art, the present invention provides an obstacle avoidance control method for a robot. A reflection area is disposed outside a blind area and can receive light beams from objects in the blind area, and the robot stores reference information for the reflection area in advance. The control method comprises:
when the robot moves to a position close to the blind area, acquiring scanning information of the reflecting area by using a sensor of the robot;
acquiring a preset type of detection parameter according to the scanning information of the reflection area;
comparing the detection parameters of the preset type with the reference information of the reflecting area;
and controlling the robot to move according to the comparison result.
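The four steps above can be condensed into a minimal decision routine. This sketch assumes a single range reading as the "preset type" of detection parameter and a simple tolerance threshold; the class, field names, and returned action strings are illustrative, not the patent's prescribed implementation.

```python
from dataclasses import dataclass

@dataclass
class ReflectionReference:
    """Pre-stored reference for one reflection area (hypothetical layout)."""
    distance_m: float   # reflection distance recorded while the blind zone was empty
    tolerance_m: float  # allowed error before an obstacle is assumed

def obstacle_in_blind_zone(scan_distance_m: float, ref: ReflectionReference) -> bool:
    """Steps 2-3: derive the detection parameter from the scan and compare it
    against the stored reference of the same type."""
    return abs(scan_distance_m - ref.distance_m) > ref.tolerance_m

def control_action(scan_distance_m: float, ref: ReflectionReference) -> str:
    """Step 4: pick a motion command from the comparison result."""
    return "slow_down" if obstacle_in_blind_zone(scan_distance_m, ref) else "proceed"
```

A reading close to the recorded distance lets the robot proceed; a large deviation suggests something in the blind zone is intercepting the beam, so the robot slows down.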
According to an aspect of the present invention, controlling the motion of the robot according to the comparison result comprises:
judging whether the error between the detection parameter of the preset type and the reference information of the reflection area exceeds a threshold; and
when the error does not exceed the threshold, controlling the robot to continue moving through the blind area.
According to an aspect of the invention, when the error exceeds the threshold, the robot is controlled to reduce its movement speed.
According to an aspect of the present invention, a waiting area is provided between the reflection area and the blind area, and the control method further comprises, when the error exceeds the threshold:
controlling the robot to move to the waiting area; and
adjusting the orientation of the robot in the waiting area so as to detect the blind area.
According to one aspect of the invention, the blind area is created by a fixed obstacle near a corner occluding the robot's sensor; the reflection area is arranged at the far end of the corner, and the robot's sensor can extend its field of view by means of the reflection area.
According to an aspect of the present invention, the reference information of the reflection area comprises scanning information that the robot acquires with its sensor at the corresponding position of the reflection area while no dynamic obstacle is present in the blind area. The control method further comprises: pre-storing the coordinates of the blind area and the corresponding reference information of the reflection area in a map of the robot's range of motion.
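The pre-stored map entry described in this aspect can be sketched as a lookup keyed by blind-area coordinates and approach direction. All coordinates, heading labels, and field names below are hypothetical placeholders for whatever map representation the robot actually uses.

```python
# Hypothetical map store: each blind area's coordinates plus the robot's
# approach direction map to the reference information for that approach.
blind_zone_map = {
    # ((x, y), heading) -> reference info of the matching reflection area
    ((12.0, 4.5), "north"): {"type": "reflect_distance", "value": 3.0},
    ((12.0, 4.5), "south"): {"type": "reflect_distance", "value": 2.4},
}

def lookup_reference(position, heading):
    """Retrieve the reference for the blind zone ahead, or None if unmapped."""
    return blind_zone_map.get((position, heading))
```

Keying on direction as well as position matches the later observation that the same corner has different reference information depending on which way the robot travels.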
According to an aspect of the invention, the sensor of the robot is a vision sensor and/or a distance sensor, and the reflection area has a reflective surface.
According to an aspect of the invention, the reflection area comprises a flat mirror or a convex mirror, and the reference information of the reflection area comprises an image or a reflectivity.
According to one aspect of the invention, the invention also includes a robot comprising:
a main body;
a movement device arranged on the main body, which can be driven to move the main body;
a sensor disposed on the body; and
a control system in communication with the sensor and the motion device and configured to be capable of performing the obstacle avoidance control method as previously described.
According to one aspect of the present invention, the present invention further includes a computer-readable storage medium including computer-executable instructions stored thereon, which when executed by a processor implement the obstacle avoidance control method as described above.
Compared with the prior art, the embodiments of the present application provide an obstacle avoidance control method for a robot in which a reflection area is arranged outside a blind area. A sensor on the robot acquires scanning information of the reflection area, and through the reflection area obtains obstacle information from within the sensor's blind area; this information is compared with the reference information of the reflection area to judge whether an obstacle is present in the blind area, and the robot is controlled to perform a corresponding avoidance action, reducing the risk of collision and improving the reliability of robot operation. The invention also includes a robot embodiment and a computer-readable storage medium embodiment, which can cooperate to execute the above obstacle avoidance control method.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
fig. 1 is a schematic flow chart of an obstacle avoidance control method according to an embodiment of the present invention;
FIG. 2 is a schematic illustration of a robot reflection zone in one embodiment of the invention;
FIG. 3 is a schematic illustration of a blind spot of a robot in the prior art;
FIG. 4 is a schematic view of the reflective region in the case of a wrap-around corner in one embodiment of the present invention;
FIG. 5 is a schematic flow chart of an obstacle avoidance control method including comparing detection parameters with reference information of a reflection area in an embodiment of the present invention;
fig. 6 is a block diagram of a robot according to an embodiment of the present invention.
Detailed Description
In the following, only certain exemplary embodiments are briefly described. As those skilled in the art will recognize, the described embodiments may be modified in various different ways, all without departing from the spirit or scope of the present invention. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.
In the description of the present invention, it is to be understood that terms such as "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise" and "counterclockwise" indicate orientations or positional relationships based on those shown in the drawings, merely for convenience and simplification of the description, and do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation; they are therefore not to be construed as limiting the present invention. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, features defined as "first" or "second" may explicitly or implicitly include one or more of the described features. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
In the description of the present invention, it should be noted that, unless otherwise explicitly stated or limited, the terms "mounted", "connected" and "coupled" are to be construed broadly: a connection may, for example, be fixed, detachable or integral; mechanical, electrical or communicative; direct, or indirect through an intervening medium; or an internal relationship between two elements. The specific meanings of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific situation.
In the present invention, unless otherwise expressly stated or limited, a first feature being "above" or "below" a second feature means that the two features are in direct contact, or are not in direct contact but contact each other via another feature between them. Moreover, a first feature being "on", "above" or "over" a second feature includes the first feature being directly above or obliquely above the second feature, or simply indicates that the first feature is at a higher level than the second feature. A first feature being "under", "below" or "beneath" a second feature includes the first feature being directly below or obliquely below the second feature, or simply indicates that the first feature is at a lower level than the second feature.
The following disclosure provides many different embodiments or examples for implementing different features of the invention. To simplify the disclosure of the present invention, the components and arrangements of specific examples are described below. Of course, they are merely examples and are not intended to limit the present invention. Furthermore, the present invention may repeat reference numerals and/or letters in the various examples, such repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. In addition, the present invention provides examples of various specific processes and materials, but one of ordinary skill in the art may recognize applications of other processes and/or uses of other materials.
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
Fig. 1 shows a specific flow of an obstacle avoidance control method 100 applied to a robot according to an embodiment of the present invention, and fig. 2 shows a schematic diagram of the robot and a blind area distribution according to an embodiment of the present invention, which is described in detail below with reference to fig. 1 and fig. 2.
A sensor for acquiring obstacle information on the moving path is generally disposed on the front side of the robot in its direction of motion, for example at the top or bottom of the robot, and scans forward within a certain range to acquire obstacle information along the robot's path.
Within the robot's range of motion, blind areas exist due to environmental factors. A blind area here is a region for which the robot's sensor cannot obtain scanning information, or cannot obtain it accurately, for reasons such as occlusion by obstacles or restrictions on the robot's turning. For example, the sensor may be occluded by fixed obstacles, decorations or wall structures, or environmental constraints may prevent the robot from rotating to the specific angle at which the sensor could obtain scanning information in a given direction. A blind area causes the scanning information obtained by the sensor to be incomplete and affects the robot's judgment of the outcome of its motion; for example, a robot moving toward a blind area may collide with an obstacle inside it. In this embodiment, a reflection area is disposed outside the blind area, and the reflection area can receive light beams, including reflected light, from objects in the blind area.
Specifically, within the robot's range of motion, fixed obstacles can be marked in the constructed map; for example, walls, shelves, tables and chairs can be marked with their specific positions, sizes and poses, and the robot's planned path can then avoid them. For obstacles whose positions are not fixed, however, such as people moving within the robot's range of motion, other moving robots, or temporarily placed objects, the robot must acquire obstacle information through its own sensors and perform a corresponding avoidance action according to the obstacle's position, size and motion. In the absence of occlusion, a suitable sensor scanning range ensures that the robot discovers obstacles in time, avoids them and prevents collision accidents. In the situations shown in figs. 2 and 3, however, a fixed obstacle or a wall structure occludes part of the sensor's scanning range and creates a blind area; the robot then cannot discover obstacles in time, and continuing normal operation risks a collision accident, especially when a moving obstacle is present in the blind area, as in the case shown in fig. 3.
As shown in fig. 2, in this embodiment a reflection area is disposed outside a blind area within the robot's range of motion, and reference information for the reflection area is stored in the robot in advance. Specifically, while no obstacle is present in the blind area, the sensor on the robot can be controlled to acquire scanning information at the corresponding position of the reflection area, and this information is recorded as the reference, for example one or more of image information such as point cloud data and feature images, or optical information such as reflectivity and reflection distance. According to a preferred embodiment of the invention, when the robot moves close to the blind area, the reflection area lies within the scanning range of the robot's sensor, and the sensor can acquire the corresponding scanning information at the position of the reflection area. The recorded quantity can be chosen according to the sensor type: for a lidar or distance sensor, a distance measurement of the reflection area can be acquired and recorded while the blind area is free of obstacles; for an image sensor such as a binocular stereo camera, a feature image can be acquired and recorded under the same condition.
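The reference-recording step can be sketched as a small calibration pass: with the blind zone confirmed empty, several range readings of the reflection area are collapsed into a stored reference. Averaging multiple scans and recording their spread is an illustrative choice of mine, not specified by the text, and the dictionary keys are hypothetical.

```python
def record_reference(scans: list[float]) -> dict:
    """Calibration sketch: average several range readings of the reflection
    area taken while the blind zone is empty into a stored reference.
    The {"distance_m", "noise_m"} format is an assumed layout."""
    mean = sum(scans) / len(scans)
    spread = max(scans) - min(scans)  # crude noise estimate across the scans
    return {"distance_m": mean, "noise_m": spread}
```

The recorded spread could later inform the comparison threshold, so that normal sensor noise is not misread as an obstacle.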
In the obstacle avoidance control method 100 of this embodiment, in step S101, when the robot moves close to the blind area, the robot's sensor acquires scanning information of the reflection area. Specifically, a preset position can be set at the location of the blind area marked in the map; when the robot reaches that position, the reflection area lies within the scanning range of the sensor, and by retrieving the position information of the reflection area corresponding to the blind area, the sensor acquires the scanning information at that position. Blind areas caused by fixed obstacles, walls or building structures can be marked in the map, and the corresponding reflection areas likewise have preset positions, such as specific coordinates. Since the extent of a blind area depends on the robot's current position, a blind area can also be identified from information obtained by the sensor in real time. For example, in the case shown in fig. 3, at the corner position the sensor's view is blocked by the wall and part of its field of view is missing; this information is fed back to the robot, which thereby knows that a blind area lies ahead on its path, satisfying the condition of this step. According to a preferred embodiment of the invention, the coordinates of blind areas can be stored in a map of the robot's range of motion, each corresponding to its reflection-area reference information. Multiple blind areas may exist simultaneously at different positions within the robot's range of motion, and the reference information corresponding to each blind area differs accordingly.
As shown in figs. 2 and 3, for the same corner position the robot may move in opposite directions, and the corresponding reflection-area reference information differs as well. According to a preferred embodiment of the invention, the coordinates of a blind area therefore correspond to specific reference information and include the robot's direction of motion. For example, for a robot moving in the direction shown in fig. 2, the coordinates of the blind area can be set to include the vertex of the near end of the corner and the robot's direction of motion, corresponding to the reference information; here the near end of the corner refers to the inner side of the robot's turning movement.
In step S102, a detection parameter of a preset type is obtained from the scanning information of the reflection area. The preset type corresponds to the reference information: for example, if the reference information stored in the robot includes the reflection distance of the reflection area, the detection parameter is the measured reflection distance; if the reference information is feature image information, the sensor's scanning information is processed in this step to obtain feature image information. In step S103, the detection parameter of the preset type is compared with the reference information of the same type; when multiple blind areas exist within the robot's range of motion, the reference information corresponding to the coordinates of the current blind area is retrieved in this step. In step S104, the robot's motion is controlled according to the result of comparing the detection parameter of the preset type with the reference information of the reflection area.
For blind areas caused by fixed obstacles or building structures at corners, as shown in fig. 2, the reflection area is arranged at the far end of the corner, i.e. on the outer side of the corner along the robot's path, while the obstacle causing the blind area lies at the near end. Without the reflection area, the robot's blind area is the shaded region shown in fig. 3; the robot cannot obtain obstacle information there, and controlling it to continue moving and turn may lead to a collision. With the reflection area in place and the robot controlled by the obstacle avoidance control method 100 of this embodiment, the sensor's field of view is as shown in fig. 2, greatly enlarged compared with fig. 3. For corners with larger angles, and even for a switchback corner as shown in fig. 4, a reflection area is likewise arranged at the far end of the corner, or reflection areas are arranged at both far ends of the wall structure, enlarging the robot's field of view and reducing the risk of collision accidents.
According to a preferred embodiment of the present invention, the reflection area is configured according to the working principle of the robot's sensor. For example, the sensor may be a vision sensor and/or a distance sensor: a vision sensor acquires image information by optical imaging, and a distance sensor obtains the distance from the sensor to the reflective surface, for example from the round-trip time or phase change of a laser beam. Accordingly, the reflection area can be given an optical reflective surface; specifically, according to a preferred embodiment of the invention, it may be a plane mirror or a convex mirror, where a convex mirror enlarges the scanning range of the sensor, further reduces its blind area and lowers the risk of collision.
Fig. 5 shows a specific flow of the obstacle avoidance control method 200 in a preferred embodiment of the present invention, which includes a process of comparing the detection parameters with the reference information of the reflection area, and is described in detail below with reference to fig. 5.
In the obstacle avoidance control method 200, steps S201 and S202 are substantially the same as steps S101 and S102 of the obstacle avoidance control method 100, respectively, and are not described again here. In step S203, it is judged whether the error between the detection parameter of the preset type and the reference information of the reflection area exceeds a threshold. As in the foregoing embodiment, the detection parameter and the reference information are of the same type, for example both a reflection distance or both a reflectivity, so that the error can be compared directly. Preferably, the reference information may include several parameter types, and when step S203 is executed, the reference information of the same type as the preset detection parameter is retrieved directly.
The threshold for the error between the detection parameter of the preset type and the reference information of the reflection area is set according to the parameter type; for parameter types with specific numeric values, such as reflection distance and reflectivity, it can be set according to field tests.
For parameters without specific numeric values, such as feature images, the threshold may be set manually. In the case of a feature image, the reference information may be, for example, feature positions marked in the image obtained after the robot's sensor scans the reflection area, such as characteristic patterns on the ground or walls, markers located in the blind area, or identifiable characteristic light sources. During actual operation, when the robot moves close to the blind area, the sensor scans the reflection area and obtains a feature image, and the number and relative positions of the features can be compared. When an obstacle is present in the blind area, features in the image obtained by the sensor are missing or distorted; the detected feature image can be compared with the feature image in the reference information, and a threshold on the matching degree decides whether an obstacle is present in the blind area.
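The matching-degree comparison for feature images can be sketched with sets of feature identifiers: an obstacle in the blind zone hides or distorts markers, so a low overlap between detected and reference features signals a possible obstacle. Representing features as a set and the 0.8 default overlap threshold are illustrative assumptions; a real system would match descriptors and positions.

```python
def features_match(detected: set, reference: set, min_overlap: float = 0.8) -> bool:
    """Compare the features seen in the reflection area against the
    reference set; return False (possible obstacle) when too many
    reference features are missing from the detected image."""
    if not reference:
        return True  # nothing to compare against
    overlap = len(detected & reference) / len(reference)
    return overlap >= min_overlap
```

A returned False would feed the same branch as a range-error exceeding its threshold: slow down and verify from the waiting area.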
In a specific embodiment, the robot's position each time it approaches the blind area may differ, so its relative position to the reflection area changes, and its pose may also vary slightly; the angular orientation of the sensor likewise affects the detection parameter. The detection parameter corresponding to the reflection area therefore changes accordingly, and comparing it with the reference information can produce a certain deviation that affects the comparison result. The effect is larger for detection parameters such as feature images and point cloud data, but such parameters generally reflect obstacle information more comprehensively and allow the obstacle's position to be judged more intuitively. In step S204, when the error does not exceed the threshold, the robot is controlled to continue moving through the blind area.
According to the preferred embodiment of the present invention, as shown in fig. 5, in step S205 the robot is controlled to reduce its movement speed when the error between the detection parameter and the reference information exceeds the threshold. An error above the threshold means an obstacle may be present in the blind area; reducing the speed avoids a collision, or hard braking, that would affect the robot's stability. Further, in a preferred embodiment of the invention, a waiting area is provided between the reflection area and the blind area; as shown in fig. 2, the waiting area is located at the intersection of the corner, and a robot in the waiting area can be controlled to rotate so that its sensor obtains scanning information over the range of the former blind area. When the error between the detection parameter of the preset type and the reference information exceeds the threshold, an obstacle may be present in the blind area. Simply decelerating the robot to a stop before the blind area would avoid collision, but misjudgment is possible; moreover, if two robots approach from opposite directions, a stop-only strategy could leave both halted before the blind area, for example at the two ends of the corner in fig. 2, each detecting the other as an obstacle and unable to continue, which reduces working efficiency.
In the preferred embodiment of the present invention, when the error between the detection parameter and the reflection area reference information exceeds the threshold, the robot is controlled to reduce its movement speed, and in step S206 the robot is controlled to move to the waiting area, which in this embodiment is set between the reflection area and the blind area. In step S207, the orientation of the robot is adjusted in the waiting area to detect the blind area: for example, the robot is controlled to rotate in place so that the sensor faces the original blind area (from the waiting area the blind area disappears), allowing it to accurately obtain scanning information within the blind area while completing the turn. When the obstacle in the blind area is movable, such as a walking person, the robot waits in the waiting area until the movable obstacle has passed through the corner. When the obstacle is a temporarily placed fixed obstacle, such as a table or chair placed by a person, the robot can be controlled to plan an avoidance route around it according to the obstacle position; for an obstacle that cannot be passed, the robot can of course be controlled to raise an alarm or re-plan a route that reaches the destination another way.
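The decision flow of steps S204–S207 can be sketched as follows. This is an illustrative Python sketch, not part of the patent: the `Observation` type, the single scalar `reflectivity` field standing in for the "preset type of detection parameter", and the returned command strings are hypothetical simplifications.

```python
from dataclasses import dataclass


@dataclass
class Observation:
    # Hypothetical stand-in for the preset type of detection parameter;
    # in the patent this may instead be a feature image or point cloud.
    reflectivity: float


def control_near_blind_zone(observed: Observation,
                            reference: Observation,
                            threshold: float) -> str:
    """Return a motion command from the error between the current scan of
    the reflection area and the pre-stored reference (steps S204-S207)."""
    error = abs(observed.reflectivity - reference.reflectivity)
    if error <= threshold:
        # S204: no obstacle inferred in the blind area -> keep moving.
        return "continue"
    # S205-S207: possible obstacle -> slow down, move to the waiting area,
    # and rotate there so the sensor faces the former blind area.
    return "slow_down_and_wait"
```

Note that the sketch slows down rather than stopping outright, mirroring the patent's argument that a hard stop risks deadlocking two robots at opposite ends of the corner.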
The present invention also includes an embodiment of the robot 1, as shown in fig. 6. The robot 1 includes a main body, a movement device 10, a sensor 20, and a control system 30. The main body is the main structural frame of the robot 1; each component of the robot 1 is fixed to the main body, which may be made of an alloy or organic material formed into a fixed shape, with each component installed at its corresponding position. The movement device 10 is disposed on the main body and can be driven to move the robot 1. Preferably, the movement device 10 is a wheel structure, for example motor-driven universal wheels, which keeps the robot 1 parallel to the ground while moving, ensures stable operation, and enables the robot to complete specific tasks such as delivery, transportation, and cleaning.
The sensor 20 is fixedly provided on the main body with a fixed constraint relationship to it, and can scan the range ahead of the movement path of the robot 1 to acquire image information, distance information, point cloud data, and the like. In various embodiments, the robot 1 preferably mounts a plurality of sensors 20 to comprehensively acquire the surrounding environmental information. In particular, the sensor 20 may be a laser range finder, a lidar, an RGB camera, a depth camera, or the like.
The control system 30 is disposed on the main body and communicates with the movement device 10 and the sensor 20. For example, the control system 30 is a processor connected to the movement device 10 and the sensor 20 through data lines or wireless communication; it can control their operation and receive the obstacle information acquired by the sensor 20. The control system 30 in this embodiment can execute the obstacle avoidance control method of the foregoing embodiments, and is used to avoid collision accidents when the robot 1 moves to the vicinity of a blind area.
According to a preferred embodiment of the present invention, the robot specifically includes a housing for carrying articles, a mobile chassis, a function controller providing user operations, an underlying controller for map generation and path planning, and an element controller controlling the moving unit and the environment detection unit. The mobile chassis is provided with at least two groups of driving wheels, each group located on one side of the chassis. The element controller controls the traveling speed of the driving wheels.
The bottom of the chassis can also be provided with at least one turn light unit, each comprising at least one turn light. The moving unit is provided with at least two groups of driving wheels, each group located on one side of the chassis. The element controller controls the traveling speed of the driving wheels and, when the robot turns, controls the turn lights in the turn light unit to be turned on in a preset manner.
Specifically, the moving unit is provided with at least one group of driving wheels serving as left driving wheels and at least one group serving as right driving wheels, the left and right driving wheels being located on opposite sides of the chassis. Optionally, the moving unit may further include at least two groups of driven wheels, one group of driven wheels corresponding to each group of driving wheels, with at least one group serving as left driven wheels and at least one group as right driven wheels. The left and right driven wheels assist the left and right driving wheels in moving the housing and chassis of the robot, reducing the load pressure on the driving wheels.
In an embodiment of the invention, the turn lights can be controlled to be turned on and their reflection in the reflection area used as a feature object: when comparing detection parameters such as feature images, the projection of the turn light in the reflection area serves as a reference for judging whether another robot is present as an obstacle.
On the basis of the above technical solution, optionally, when the speed difference between the driving wheels on the two sides of the chassis is greater than a preset value, the element controller controls the turn lights in the turn light unit to be turned on in a preset manner.
In an embodiment of the present invention, the robot path planning method may be applied to any robot. Specifically, a positioning map of the current working area is determined, the positioning map being formed by mapping the environment in which the robot is located. In different embodiments, the robot may be a delivery robot, such as a meal delivery robot in a restaurant, a transfer robot in a warehousing environment, or a cleaning robot in home and business environments. Specifically, the robot is configured with sensors and a modeling processor that builds an environment map from the environmental data collected by the sensors. In a specific embodiment, the sensors include a lidar, an ultrasonic sensor, and an infrared sensor, which collect data of the working area where the robot is located; the modeling processor creates a map from the collected data, with different sensors generating different map layers in the process, for example a static layer, a dynamic obstacle layer, an ultrasonic layer, and a visual layer. These layers are fused to obtain a positioning map for positioning and navigating the robot.
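The layer-fusion step described above can be illustrated with a minimal sketch. The patent does not specify the fusion rule, so treating each layer as an occupancy grid and fusing by per-cell maximum (the most conservative reading across sensors) is an assumption for illustration, and `fuse_layers` is a hypothetical helper.

```python
import numpy as np

FREE, OCCUPIED = 0, 100  # common occupancy-grid cell values


def fuse_layers(layers):
    """Fuse per-sensor occupancy layers (static, dynamic-obstacle,
    ultrasonic, visual, ...) into one positioning map by taking the
    maximum occupancy value per cell, so any sensor can mark a cell
    as occupied."""
    return np.stack(layers).max(axis=0)
```

For example, a cell marked occupied only in the ultrasonic layer still appears occupied in the fused positioning map.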
When the robot is controlled to move to a destination, for example to deliver food to a preset point within its moving range, a path is planned according to the positioning map.
Further, the current position and the target position of the robot are determined according to the positioning map. Fixed obstacles exist within the robot's moving range, such as tables and chairs, furniture, shelves, building structures, or fixed decorative structures; obstacles at fixed positions can be acquired by the sensor and marked during map construction, and their positions are determined from the positioning map when planning a path. A path is then planned according to the current position, the target position, and the obstacle positions.
Specifically, the target position is a position set by a user or determined by the processing system of the robot; it may be the next position to move to during motion or the final position the robot is to reach.
The position of an obstacle on the map can be located through the positioning map. With this embodiment, the robot can determine obstacle positions and plan a route without degrading navigation accuracy. The current position is the real-time position information of the robot determined through its position sensor. However, movable obstacles, or fixed obstacles temporarily added by people, cannot be reflected in the positioning map in real time. During normal movement, the robot scans the range ahead with its sensor to obtain obstacle information, but collision accidents may still occur because of the blind area; with the obstacle avoidance control method of the above embodiments, the reflection area can be used to judge whether an obstacle exists in the blind area.
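Planning a route around known obstacle positions on the positioning map can be sketched with a simple grid search. The patent does not name a planning algorithm, so breadth-first search over an occupancy grid is a stand-in chosen for illustration, and `plan_path` is a hypothetical function.

```python
from collections import deque


def plan_path(grid, start, goal):
    """Breadth-first search on a 2D occupancy grid (0 = free, 1 = obstacle).
    Returns the shortest list of (row, col) cells from start to goal,
    moving in 4 directions, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # visited set + backpointers
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []             # walk backpointers to rebuild the path
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and nxt not in prev):
                prev[nxt] = cell
                queue.append(nxt)
    return None                   # no free route to the goal
```

A temporarily placed obstacle detected via the reflection area could be rasterized into `grid` before replanning, matching the avoidance-route behavior described above.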
According to a preferred embodiment, the present invention further includes a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the obstacle avoidance control method described above.
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An obstacle avoidance control method for a robot, wherein a reflection area is provided outside a blind area and can receive light beams from objects in the blind area, and reflection area reference information is stored in the robot in advance, the control method comprising:
when the robot moves to a position close to the blind area, acquiring scanning information of the reflecting area by using a sensor of the robot;
acquiring a preset type of detection parameter according to the scanning information of the reflection area;
comparing the detection parameters of the preset type with the reference information of the reflecting area;
and controlling the robot to move according to the comparison result.
2. The obstacle avoidance control method according to claim 1, wherein the step of controlling the robot to move according to the comparison result includes:
judging whether the error between the detection parameter of the preset type and the reference information of the reflecting area exceeds a threshold value or not;
and when the error does not exceed the threshold value, controlling the robot to continuously move to pass through the blind area.
3. The obstacle avoidance control method according to claim 2, wherein when the error exceeds a threshold value, the robot is controlled to reduce the movement speed.
4. The obstacle avoidance control method according to claim 3, wherein a waiting area is provided between the reflection area and the blind area, the control method further comprising: when the error exceeds the threshold value,
controlling the robot to move to the waiting area;
in the waiting area, the orientation of the robot is adjusted to detect the blind area.
5. The obstacle avoidance control method according to any one of claims 1 to 4, wherein the blind area is generated by the sensor of the robot being occluded by a fixed obstacle near a corner, the reflection area is provided at the far end of the corner, and the sensor of the robot can expand its field of view by using the reflection area.
6. The obstacle avoidance control method according to any one of claims 1 to 4, wherein the reflection area reference information includes scanning information of the position corresponding to the reflection area, obtained by the robot using the sensor when no dynamic obstacle exists in the blind area; the control method further comprising: pre-storing the coordinates of the blind area and the corresponding reflection area reference information in a map of the moving range of the robot.
7. The obstacle avoidance control method according to any one of claims 1 to 4, wherein the sensor of the robot is a vision sensor and/or a distance sensor, and the reflection area has a reflection surface.
8. The obstacle avoidance control method according to any one of claims 1 to 4, wherein the reflection area includes a plane mirror or a convex mirror, wherein the reflection area reference information includes an image or a reflectivity.
9. A robot, comprising:
a main body;
the movement device is arranged on the main body and can be driven to drive the main body to move;
a sensor disposed on the body; and
a control system in communication with the sensor and the motion device and configured to be capable of performing the obstacle avoidance control method of any of claims 1-8.
10. A computer-readable storage medium comprising computer-executable instructions stored thereon, which when executed by a processor implement the obstacle avoidance control method of any of claims 1-8.
CN202210713398.8A 2022-06-22 2022-06-22 Obstacle avoidance control method, robot and computer-readable storage medium Pending CN115145273A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210713398.8A CN115145273A (en) 2022-06-22 2022-06-22 Obstacle avoidance control method, robot and computer-readable storage medium


Publications (1)

Publication Number Publication Date
CN115145273A true CN115145273A (en) 2022-10-04

Family

ID=83408751




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination