CN114415659B - Robot safety obstacle avoidance method and device, robot and storage medium

Info

Publication number: CN114415659B
Application number: CN202111522716.4A
Authority: CN (China)
Prior art keywords: information, obstacle, robot, change map, static change
Other languages: Chinese (zh)
Other versions: CN114415659A
Inventors: 李涛 (Li Tao), 刘德政 (Liu Dezheng), 王宗文 (Wang Zongwen)
Assignee (current and original): Yantai Jereh Oilfield Services Group Co Ltd
Application filed by Yantai Jereh Oilfield Services Group Co Ltd
Priority to CN202111522716.4A
Publication of CN114415659A (application) and CN114415659B (grant)
Legal status: Active (application granted)

Classifications

All of the listed classes fall under G05D1/021 (G PHYSICS; G05 CONTROLLING, REGULATING; G05D systems for controlling or regulating non-electric variables; G05D1/00 control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots; G05D1/02 control of position or course in two dimensions; G05D1/021 specially adapted to land vehicles):
    • G05D1/0238 and G05D1/024: using optical position detecting means with obstacle or wall sensors, in combination with a laser
    • G05D1/0212 and G05D1/0214: with means for defining a desired trajectory, in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/0221: with means for defining a desired trajectory, involving a learning process
    • G05D1/0231, G05D1/0246 and G05D1/0253: using optical position detecting means with a video camera and image processing, extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
    • G05D1/0255: using acoustic signals, e.g. ultrasonic signals
    • G05D1/0257: using a radar
    • G05D1/0276: using signals provided by a source external to the vehicle


Abstract

The application relates to the technical field of robots and provides a robot safety obstacle avoidance method and device, a robot, and a storage medium. The method comprises: identifying obstacle information of a first area through a first detection device of the robot; if the obstacle information is target obstacle information, updating a static change map of the robot according to the target obstacle information to obtain a first static change map; updating the first static change map according to first detection information and auxiliary detection information to obtain a second static change map, wherein the first detection information is information detected by the first detection device and the auxiliary detection information is first ground distance information detected by an auxiliary detection device in the robot; updating the second static change map according to second ground distance information detected by the auxiliary detection device to obtain a third static change map; and performing path planning according to the third static change map to obtain travel path information of the robot.

Description

Robot safety obstacle avoidance method and device, robot and storage medium
Technical Field
The application relates to the technical field of robots, and in particular to a robot safety obstacle avoidance method and device, a robot, and a storage medium.
Background
At present, the navigation obstacle avoidance method of indoor robots mainly detects obstacles through a laser radar (lidar) and plans a path according to the detected obstacles, thereby realizing navigation obstacle avoidance.
In a specific implementation, lidars fall into three-dimensional (3D) and two-dimensional (2D) types. A 2D lidar can only detect obstacles in the plane where the radar sits; obstacles below that plane cannot be detected. A 3D lidar can acquire spatial obstacle information but is very expensive. Moreover, whether it uses a 2D or a 3D lidar, an indoor robot cannot detect specular-reflection objects or transparent objects, which poses a great potential safety hazard. To address the safety problem of detecting obstacles with a lidar alone, existing indoor robots pair a depth camera with a 2D lidar for navigation obstacle avoidance; however, because a depth camera is strongly affected by lighting, such a robot struggles to identify obstacles when it enters a dimly lit environment, and specular-reflection and transparent objects still cannot be detected.
The navigation obstacle avoidance methods of prior-art indoor robots therefore carry great potential safety hazards.
Disclosure of Invention
In order to solve the technical problems or at least partially solve the technical problems, the application provides a robot safety obstacle avoidance method, a robot safety obstacle avoidance device, a robot and a storage medium.
In a first aspect, the present application provides a robot safety obstacle avoidance method, comprising:
identifying obstacle information of a first area by a first detection device of the robot;
if the obstacle information is target obstacle information, updating a static change map of the robot according to the target obstacle information to obtain a first static change map;
updating the first static change map according to first detection information and auxiliary detection information to obtain a second static change map, wherein the first detection information is information detected by the first detection device, and the auxiliary detection information is first ground distance information detected by an auxiliary detection device in the robot;
updating the second static change map according to second ground distance information detected by the auxiliary detection device to obtain a third static change map;
and planning a path according to the third static change map to obtain travel path information of the robot.
Optionally, the first detection device includes a ranging device and an image recognition device, and the identifying, by the first detection device of the robot, obstacle information of the first area includes:
acquiring distance information detected by the ranging device and image information detected by the image recognition device;
and performing obstacle recognition according to the distance information and/or the image information to obtain the obstacle information of the first area.
Optionally, the performing obstacle recognition according to the distance information and/or the image information to obtain the obstacle information of the first area includes:
identifying first obstacle information based on the distance information and/or identifying second obstacle information based on the image information;
determining whether map obstacle region information of the robot contains the first obstacle information and/or the second obstacle information, wherein the map obstacle region information contains obstacle region information in the static change map and obstacle region information in an original map of the robot;
if the map obstacle region information does not contain the first obstacle information and/or the second obstacle information, determining the first obstacle information and/or the second obstacle information as the target obstacle information;
and if the map obstacle region information contains the first obstacle information and/or the second obstacle information, determining the first obstacle information and/or the second obstacle information as non-target obstacle information.
Optionally, updating the static change map of the robot according to the target obstacle information to obtain a first static change map includes:
determining a target obstacle corresponding to the target obstacle information;
if the target obstacle is an active moving obstacle, adding the target obstacle information to a dynamic change map to obtain a first dynamic change map;
and if the target obstacle is a passive moving obstacle, adding the target obstacle information to a static change map to obtain a first static change map.
Optionally, the updating the first static change map according to the first detection information and the auxiliary detection information to obtain a second static change map includes:
acquiring first distance information detected by the ranging device, first image information detected by the image recognition device, and first ground distance information detected by the auxiliary detection device;
updating first characteristic obstacle information in the first static change map based on the first distance information, and updating second characteristic obstacle information in the first static change map based on the first image information and the first ground distance information, to obtain an updated static change map;
and determining the updated static change map as the second static change map.
Optionally, the path planning according to the third static change map to obtain the travel path information of the robot includes:
merging the third static change map with the original map and the first dynamic change map to obtain a fusion map;
and performing path planning based on the fusion map to obtain the travel path information of the robot.
Optionally, the updating the second static change map according to the second ground distance information detected by the auxiliary detection device to obtain a third static change map includes:
acquiring second ground distance information detected by the auxiliary detection device;
if the second ground distance information is smaller than a preset first distance threshold, determining target characteristic obstacle information based on the second ground distance information, and updating the second static change map according to the target characteristic obstacle information to obtain a third static change map;
and if the second ground distance information is larger than a second distance threshold, determining ground condition information according to the second ground distance information, and updating the second static change map according to the ground condition information to obtain a third static change map, wherein the second distance threshold is larger than the first distance threshold.
Optionally, the method further comprises:
acquiring second image information and second distance information, wherein the second image information is image information of a second area detected by the image recognition device, and the second distance information is distance information of the second area detected by the ranging device;
determining obstacle information of the second area based on the second image information and the second distance information;
determining movement attribute information corresponding to the obstacle information of the second area;
if the movement attribute information is active attribute information, adding the obstacle information of the second area to a dynamic change map of the robot based on the active attribute information;
and if the movement attribute information is passive attribute information, adding the obstacle information of the second area to a static change map of the robot based on the passive attribute information.
In a second aspect, the present application provides a robot safety obstacle avoidance device, comprising:
a first detection and identification module, for identifying obstacle information of a first area through a first detection device of the robot;
a static change map first updating module, for updating the static change map of the robot according to target obstacle information when the obstacle information is the target obstacle information, to obtain a first static change map;
a static change map second updating module, for updating the first static change map according to first detection information and auxiliary detection information to obtain a second static change map, wherein the first detection information is information detected by the first detection device, and the auxiliary detection information is first ground distance information detected by the auxiliary detection device in the robot;
a static change map third updating module, for updating the second static change map according to second ground distance information detected by the auxiliary detection device to obtain a third static change map;
and a path planning module, for performing path planning according to the third static change map to obtain the travel path information of the robot.
In a third aspect, the present application provides a robot, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with each other via the communication bus;
a memory for storing a computer program;
and the processor is used for realizing the steps of the robot safety obstacle avoidance method according to any one of the embodiments of the first aspect when executing the program stored in the memory.
In a fourth aspect, the present application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the robot safety obstacle avoidance method according to any of the embodiments of the first aspect.
In summary, obstacle information of a first area is identified through the first detection device of the robot, and when the obstacle information is target obstacle information, the static change map of the robot is updated according to the target obstacle information to obtain a first static change map. The first static change map is then updated according to first detection information and auxiliary detection information to obtain a second static change map, and the second static change map is updated according to second ground distance information detected by the auxiliary detection device to obtain a third static change map. Path planning is performed according to the third static change map to obtain the travel path information of the robot, so that the robot can travel safely according to that information. This realizes safe obstacle avoidance for the robot and solves the problems that an existing indoor robot struggles to identify obstacles through its depth camera when entering a dimly lit environment and cannot detect specular-reflection objects or transparent objects, leaving potential safety hazards while the indoor robot travels.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the application or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, and it will be obvious to a person skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic flow chart of a robot safety obstacle avoidance method according to an embodiment of the present application;
Fig. 2 is a schematic step flow diagram of a robot safety obstacle avoidance method according to an alternative embodiment of the present application;
Fig. 3 is a schematic step flow diagram of a robot safety obstacle avoidance method according to an alternative embodiment of the present application;
Fig. 4 is a block diagram of a robot safety obstacle avoidance apparatus according to an embodiment of the present application;
Fig. 5 is a schematic structural diagram of a robot according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
In a specific implementation, laser devices such as 3D and 2D lidars, together with a depth camera, are installed only in the front area of an indoor robot. The robot can therefore detect obstacle information only in its front area; obstacle information to its left, right, and rear cannot be detected. Because the robot cannot acquire left, right, and rear obstacle information while traveling, a great potential safety hazard exists.
One of the core ideas of the embodiments of the application is a robot safety obstacle avoidance method in which obstacle information is determined through a first detection device and an auxiliary detection device of the robot. The robot can then plan a path based on the detected obstacle information to obtain travel path information and travel safely according to that information, realizing safe obstacle avoidance.
In a specific implementation, the embodiments of the application divide the area around the front of the robot (comprising the directly-front, left-front, and right-front areas) into a far area and a near area. The detection devices that detect and identify obstacles in the robot's front area, covering both the far area and the near area, serve as the first detection device of the robot. Devices that detect and identify obstacles only in the near area serve as the auxiliary detection device, and auxiliary detection devices are arranged at the front, left, right, and rear of the robot, so that the robot can acquire left, right, and rear obstacle information while traveling and avoid obstacles safely according to the detected obstacle information.
For the purpose of facilitating an understanding of the embodiments of the present application, reference will now be made to the following description of specific embodiments, taken in conjunction with the accompanying drawings, which are not intended to limit the embodiments of the application.
Referring to fig. 1, a step flow chart of a robot safety obstacle avoidance method provided by an embodiment of the application is shown. In a specific implementation, the robot safety obstacle avoidance method provided by the embodiment of the application specifically includes the following steps:
Step 110, identifying obstacle information of a first area by a first detection device of the robot.
Specifically, the first area may be the near area within the area surrounding the front of the robot, and the obstacles recognized in the near area by the first detection device may serve as the obstacle information of the first area. The obstacle information of the first area may be identified through a lidar device and a camera device: for example, the lidar device can measure the distance from the robot to each obstacle in the first area, the camera device can acquire images containing the obstacles of the first area so that obstacles are detected through image recognition, and the obstacle information of the first area is then determined from the obstacle distance information identified by the lidar device together with the image information identified by the camera device. In the embodiments of the application, the first detection device may therefore include a ranging device, which may be a lidar device, and an image recognition device, which may be a camera device, among others; the embodiments of the application are not limited in this respect. For example, where the robot mounts a lidar device and a camera device, these may serve as the first detection device of the robot to identify obstacles in the robot's near area and generate the corresponding obstacle information of the first area. The obstacle information of the first area may be used to represent the obstacles of the first area, the first area being the near area of the robot. Note that the lidar device may be of various types, for example a 2D lidar device or a 3D lidar device, which the embodiments of the application do not limit, and the camera device may be any device comprising a camera, such as a depth camera.
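As an illustration only, the following minimal Python sketch shows how such near-area obstacle information could be assembled from a 2D lidar scan and camera detections. The interfaces are assumptions for the sketch, not details fixed by the application: the (angle, range) scan format, camera detections carrying x/y positions already projected into the robot frame, the 1.2 m near-area radius, and the 15 cm association threshold.

    import math
    from dataclasses import dataclass

    NEAR_RADIUS_M = 1.2   # assumed boundary of the first (near) area

    @dataclass
    class Obstacle:
        x: float          # metres, robot frame
        y: float
        source: str       # "lidar", "camera", or "fused"

    def identify_first_area_obstacles(lidar_hits, camera_obstacles):
        """Collect near-area obstacle information from both sensors."""
        found = []
        for angle, rng in lidar_hits:                  # (angle_rad, range_m) pairs
            if rng <= NEAR_RADIUS_M:
                found.append(Obstacle(rng * math.cos(angle),
                                      rng * math.sin(angle), "lidar"))
        for cam in camera_obstacles:                   # objects with .x and .y
            twin = next((o for o in found
                         if math.hypot(o.x - cam.x, o.y - cam.y) < 0.15), None)
            if twin is not None:
                twin.source = "fused"                  # seen by both sensors
            elif math.hypot(cam.x, cam.y) <= NEAR_RADIUS_M:
                found.append(Obstacle(cam.x, cam.y, "camera"))
        return found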
As an example of the application, the lidar device may be installed at the front of the robot at a height of 15 cm from the bottom, scanning a 180-degree range in its horizontal plane, and the camera device may be installed at the front of the robot at a height of 1 meter from the bottom; installing the lidar device and the camera device on the robot in this way enables recognition of the obstacle information of the first area.
Further, the first area may be the near area within the front area of the robot. Specifically, auxiliary sensors may be installed on the robot, such as ultrasonic radars and/or millimeter-wave radars, to which the application is not limited. The auxiliary sensors may serve as the auxiliary detection device, and the robot's front area may be divided into a near area and a far area according to the distance from the auxiliary sensors to the ground.
In the actual processing, the near area within the front area of the robot may be determined as the first area, and the far area as the second area. The obstacles identified by the robot in the first area may serve as the obstacle information of the first area.
In a specific implementation, the front area of the robot may also be divided into a directly-front area, a left-front area, and a right-front area. Specifically, two auxiliary sensors, such as a No. 3 sensor and a No. 4 sensor, may be installed directly in front of the robot; these may be auxiliary sensors with a measurement angle of 30 degrees installed at a height of 25 cm, although the application is not limited to these values. The two sensors then serve as the directly-front auxiliary sensors, and the formula d_max = 30 / sin((30/2)°) ≈ 120 gives a maximum ground distance d_max of about 120 cm for the directly-front sensors.
Similarly, two auxiliary sensors, such as a No. 1 sensor and a No. 2 sensor, may be installed at the left front of the robot; these may be auxiliary sensors with a measurement angle of 45 degrees installed at a height of 35 cm, although the application is not limited to these values. The two sensors then serve as the left-front auxiliary sensors, and the formula d_max = 35 / sin((45/2)°) ≈ 92 gives a maximum ground distance d_max of about 92 cm for the left-front auxiliary sensors. In addition, two auxiliary sensors, such as a No. 5 sensor and a No. 6 sensor, may be installed at the right front of the robot, likewise with a measurement angle of 45 degrees and an installation height of 35 cm, although the application is not limited to these values. The two sensors then serve as the right-front auxiliary sensors, and the same formula gives a maximum ground distance d_max of about 92 cm for the right-front auxiliary sensors.
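The mounting geometry above can be checked with a short computation. The sketch below simply restates the description's formula d_max = h / sin(θ/2), where h is the height term appearing in the formula and θ the sensor's measurement angle; it is a verification aid, not part of the claimed method.

    import math

    def max_ground_distance(h_cm, cone_angle_deg):
        # d_max = h / sin(theta / 2), as in the description's formula
        return h_cm / math.sin(math.radians(cone_angle_deg / 2.0))

    print(max_ground_distance(30, 30))  # ~115.9 cm; the text rounds this to ~120 for the directly-front pair
    print(max_ground_distance(35, 45))  # ~91.5 cm, matching the ~92 quoted for the left/right-front pairs
    print(max_ground_distance(45, 60))  # 90.0 cm, the value used later for the left, right, and rear sensors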
After two auxiliary sensors are installed at each of the left front, directly front, and right front of the robot, obstacles in the first area can be detected by the auxiliary sensors. In the actual processing, the obstacles identified by the No. 2 auxiliary sensor, the No. 3 auxiliary sensor, the No. 4 auxiliary sensor, and the No. 5 auxiliary sensor can be merged according to the actual detection conditions. If the distance between the obstacle detected by the No. 3 auxiliary sensor and the obstacle detected by the No. 4 auxiliary sensor is smaller than a preset distance, the two detections may be determined to be the same obstacle; the preset distance may be 15 cm, although the application is not limited to this value. That is, when the obstacle detected by the No. 3 auxiliary sensor lies within 15 cm of the obstacle detected by the No. 4 auxiliary sensor, the two may be determined to be the same obstacle, as in the sketch below.
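A minimal sketch of this merge rule follows, assuming each auxiliary sensor reports at most one obstacle position already expressed in the robot frame; the data layout and the averaging of merged detections are assumptions of the sketch.

    import math

    MERGE_DISTANCE_M = 0.15   # the 15 cm threshold from the description

    def merge_adjacent_detections(detections):
        """detections: list of (sensor_id, (x, y)); returns merged (x, y) positions."""
        groups = []
        for sensor_id, pos in detections:
            for group in groups:
                if any(math.hypot(pos[0] - p[0], pos[1] - p[1]) < MERGE_DISTANCE_M
                       for _, p in group):
                    group.append((sensor_id, pos))     # same physical obstacle
                    break
            else:
                groups.append([(sensor_id, pos)])      # a new obstacle
        return [(sum(p[0] for _, p in g) / len(g),
                 sum(p[1] for _, p in g) / len(g)) for g in groups]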
Furthermore, after the front area is divided into a far area and a near area according to the distance from the auxiliary sensors to the ground, the obstacle information of the near area can be identified jointly through the auxiliary sensors, the lidar device, and the camera device. The robot thereby collects obstacle information more comprehensively, the auxiliary sensors make up for the performance shortcomings of the lidar device and the camera device, and the safety of navigation improves.
In the actual processing, besides the front area of the robot, auxiliary sensors may also be installed in the left, right, and rear areas so that the robot can detect obstacle information in each of these areas. Specifically, two auxiliary sensors may be installed in the left area, two in the right area, and three in the rear area; the measurement angles of these sensors may all be 60 degrees and their installation heights 50 cm, although the application is not limited to these values. The formula d_max = 45 / sin((60/2)°) = 90 then gives a maximum ground distance d_max of 90 cm for the left, right, and rear auxiliary sensors. With auxiliary sensors installed in the left, right, and rear areas, the robot can detect obstacle information in each of those areas, and in subsequent processing the obstacles corresponding to the detected obstacle information can be added to the dynamic change map as active moving obstacles.
Step 120, if the obstacle information is target obstacle information, updating the static change map of the robot according to the target obstacle information to obtain a first static change map.
Specifically, the target obstacle corresponding to the target obstacle information may be an obstacle contained in neither the original map nor the static change map. After the first detection device identifies the obstacle information of the near area, that information may be compared with the obstacle information contained in the original map and in the static change map; if it exists in neither, it may be determined as the target obstacle information and the corresponding obstacle as the target obstacle. The static change map may then be updated based on the target obstacle information: if the obstacle corresponding to the target obstacle information is a passive moving obstacle, it is added to the static change map to update it, and the updated static change map may serve as the first static change map.
In a specific implementation, while the robot runs, a map-building module can construct a grid map of the space where the robot is located in real time as the robot moves, using map-building algorithms such as Cartographer, Gmapping, or RTAB-Map (Real-Time Appearance-Based Mapping) together with the lidar device and depth camera installed on the robot in advance; this grid map can then serve as the original map. After the first detection device identifies the obstacle information of the near area, the identified information is compared with the obstacle information contained in the robot's original map and static change map to determine whether it already exists in either. If the identified near-area obstacle information exists in the original map or the static change map, no further determination is needed. If it exists in neither, the obstacle information may be determined as target obstacle information, and it may then be further determined whether the corresponding target obstacle is a passive moving obstacle. If the target obstacle is a passive moving obstacle, the target obstacle information can be added to the static change map to obtain the first static change map; if it is an active moving obstacle, the target obstacle information may be added to the dynamic change map instead.
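The routing of a newly detected obstacle can be pictured with the following minimal sketch, assuming hypothetical map objects exposing contains()/add() membership tests and an is_active_mover classifier (for example, a person/robot detector running on the camera image); none of these names come from the application.

    def route_new_detection(obstacle, original_map, static_map, dynamic_map,
                            is_active_mover):
        """Step 120 sketch: route a detection by novelty and movement attribute."""
        if original_map.contains(obstacle) or static_map.contains(obstacle):
            return                          # already known: not a target obstacle
        # A new ("target") obstacle: route it by its movement attribute.
        if is_active_mover(obstacle):
            dynamic_map.add(obstacle)       # yields the first dynamic change map
        else:
            static_map.add(obstacle)        # yields the first static change map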
Step 130, updating the first static change map according to first detection information and auxiliary detection information to obtain a second static change map, wherein the first detection information is information detected by the first detection device, and the auxiliary detection information is first ground distance information detected by the auxiliary detection device in the robot.
Specifically, the first detection device may identify obstacles in the near area, and the identified near-area obstacle information serves as the first detection information. The auxiliary sensors in the auxiliary detection device likewise cover the near area: the distance from an auxiliary sensor to the ground may serve as the first ground distance information, which is determined as the auxiliary detection information, and whether an auxiliary sensor has detected an obstacle can then be judged from the auxiliary detection information. The first static change map is updated according to the near-area obstacle information corresponding to the first detection information and to the auxiliary detection information, so as to obtain the second static change map.
Specifically, the obstacle information corresponding to the first detection information may be compared with the obstacle region information in the first static change map to check whether they accord with each other. If the obstacle region information in the first static change map contains obstacle information that does not accord with the first detection information, the corresponding obstacle information is deleted from the first static change map to obtain the second static change map. Similarly, the obstacle information detected by the auxiliary detection device may be compared with the obstacle information in the first static change map, and any entry that does not accord with the auxiliary detection information is deleted from the first static change map to obtain the second static change map.
For example, when the lidar device in the first detection device no longer detects certain obstacle information contained in the static change map, that obstacle information may be deleted from the static change map to obtain the second static change map; the same applies when the camera device no longer detects an obstacle recorded in the static change map. A minimal sketch of this pruning follows.
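The sketch assumes each static-change-map entry records which sensor originally observed it and that the map exposes entries()/remove(); these interfaces are invented for the sketch.

    def prune_unconfirmed(static_map, lidar_detections, camera_detections):
        """Step 130 sketch: drop entries the responsible sensor no longer sees."""
        current = {"lidar": set(lidar_detections), "camera": set(camera_detections)}
        for entry in list(static_map.entries()):
            seen = current.get(entry.source)
            if seen is not None and entry.cell not in seen:
                static_map.remove(entry)    # yields the second static change map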
Further, the auxiliary detection device may judge whether the robot has detected obstacle information in the near area based on the first ground distance information in the auxiliary detection information. Specifically, whether an obstacle is detected directly in front of the robot can be determined by comparing the first ground distance information of the six auxiliary sensors installed in the robot's front area with each sensor's maximum ground distance d_max.
For example, the maximum ground distance of the auxiliary sensors installed directly in front of the robot may be 120 cm. A directly-front auxiliary sensor takes the ground distance it measures in real time as the first ground distance information and compares it with this maximum distance, and whether an obstacle exists in the area directly in front of the robot is determined from the comparison result. Specifically, if the ground distance corresponding to the first ground distance information is greater than 120 cm and less than 125 cm, it may be determined that the sensor detects no obstacle in the area directly in front of the robot; if the obstacle region information contained in the original map and the first static change map includes obstacle information at that position, that information is deleted from the first static change map to obtain the second static change map.
During the robot's real-time movement, the lidar device, the camera device, and the auxiliary sensors detect obstacles in real time, the detected obstacles are added to the first static change map, and the obstacle information in the map is continuously updated. This improves the accuracy of the map and, at path-planning time, guarantees the timeliness and safety of the planned path so that the robot can avoid obstacles safely.
Step 140, updating the second static change map according to the second ground distance information detected by the auxiliary detection device to obtain a third static change map.
Specifically, the auxiliary detection device may judge from the second ground distance information it detects in the near area whether an obstacle has been detected, and any detected obstacle information can be added to the second static change map to update it and obtain the third static change map.
In a specific implementation, if the ground distance corresponding to the second ground distance information detected by an auxiliary sensor in the auxiliary detection device is smaller than the maximum distance d_max, it may be determined that an obstacle exists in the area directly in front of the robot. The obstacle information may then be compared with the obstacle region information contained in the original map and in the first static change map; if it exists in neither, the corresponding obstacle may be determined to be a specular-reflection obstacle or a transparent obstacle. The second static change map is then updated according to this specular-reflection or transparent obstacle information, for example by adding it to the second static change map to obtain the third static change map, as in the sketch below.
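A minimal sketch of this judgement follows, assuming a hypothetical sensor object carrying its latest echo range_cm and its geometric maximum d_max_cm, plus a project() helper that maps the echo to a map cell; all of these names are assumptions.

    def detect_specular_or_transparent(sensor, original_map, static_map, project):
        """Step 140 sketch: an echo shorter than d_max that no map explains."""
        if sensor.range_cm >= sensor.d_max_cm:
            return None                   # echo came from the floor: no obstacle
        cell = project(sensor, sensor.range_cm)
        if original_map.contains(cell) or static_map.contains(cell):
            return None                   # an already-known obstacle
        # Unseen by the lidar/camera maps but heard by ultrasound: treat it as a
        # specular-reflection or transparent obstacle and record it.
        static_map.add(cell)              # yields the third static change map
        return cell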
It can be seen that in the embodiments of the application, detection of specular-reflection objects and transparent objects can be realized through the auxiliary detection device, and the detected objects are added to the second static change map to obtain the third static change map. In subsequent path planning, the robot can plan its path according to the specular-reflection and transparent obstacles contained in the third static change map, avoid them while traveling, and thereby achieve safe obstacle avoidance.
Step 150, planning a path according to the third static change map to obtain the travel path information of the robot.
Specifically, after the third static change map is determined, the robot can perform path planning according to it, determine its travel path information, and be controlled to travel with safe obstacle avoidance according to that information.
In the actual processing, the third static change map, the original map, and the dynamic change map can be merged to obtain a fusion map, and the robot can plan its route according to the fusion map to obtain the travel path information. The dynamic change map may contain the information of the active moving obstacles detected by the robot while traveling.
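As a minimal sketch, the fusion can be pictured as a cell-wise OR of three equally sized occupancy grids (0 = free, 1 = occupied); the grid representation is an assumption, since the application does not fix a map data structure.

    import numpy as np

    def fuse_maps(original, static_change, dynamic_change):
        """Step 150 sketch: a cell is occupied if any layer marks it occupied."""
        assert original.shape == static_change.shape == dynamic_change.shape
        return np.maximum.reduce([original, static_change, dynamic_change])

    # Usage sketch: a planner (e.g. A*) then runs on the fused grid.
    # fused = fuse_maps(original_map, third_static_map, first_dynamic_map)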
In a specific implementation, the robot's position in the map can be determined through a positioning module: data from the robot's inertial measurement unit (IMU), wheel odometry, lidar, and depth camera are collected to determine the robot's position in a spatial coordinate system, and the robot's position in the map coordinate system is then obtained through the transformation between the spatial coordinate system and the map coordinate system, realizing real-time positioning of the robot. The navigation module then uses the constructed map data and the obstacle information perceived in real time by the first detection device to plan the path; after the robot's travel path is determined, the planning result is sent to the control module, which controls the robot to travel around obstacles as required.
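In the planar case, the space-to-map conversion mentioned here is a rigid 2D transform; the following sketch assumes the transform parameters (tx, ty, theta) between the two coordinate systems are known, for example from calibration.

    import math

    def space_to_map(x, y, tx, ty, theta):
        """Transform a point from the spatial frame into the map frame."""
        c, s = math.cos(theta), math.sin(theta)
        return (c * x - s * y + tx, s * x + c * y + ty)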
In summary, the embodiments of the application identify the obstacle information of a first area through the first detection device of the robot and, when that information is target obstacle information, update the robot's static change map accordingly to obtain a first static change map. The first static change map is updated according to the first detection information, which is detected by the first detection device, and the auxiliary detection information, which is the first ground distance information detected by the auxiliary detection device in the robot, to obtain a second static change map; the second static change map is in turn updated according to the second ground distance information detected by the auxiliary detection device to obtain a third static change map. Path planning according to the third static change map yields the robot's travel path information, so that the robot can travel safely according to it. This realizes safe obstacle avoidance and solves the problems that an existing indoor robot struggles to identify obstacles through its depth camera when entering a dimly lit environment and cannot detect specular-reflection or transparent objects, leaving safety hazards while the robot travels.
Referring to fig. 2, a schematic flow chart of steps of a robot safety obstacle avoidance method according to an alternative embodiment of the present application is shown. The robot safety obstacle avoidance method can specifically comprise the following steps:
Step 210, obtaining distance information detected by the ranging device and image information detected by the image recognition device.
Step 220, performing obstacle recognition according to the distance information and/or the image information, so as to obtain obstacle information of the first area.
Specifically, the ranging device may be a lidar device and the image recognition device may be a camera device, to which the present application is not limited. The lidar device may detect the real-time distance between the robot and the obstacles in the first area, the detected distances serving as the distance information; the camera device may detect image information of the obstacles in the first area, and the obstacles in the first area can be identified from that image information.
Step 230, if the obstacle information is the target obstacle information, updating the static change map of the robot according to the target obstacle information to obtain a first static change map.
In an optional embodiment of the present application, updating the static change map of the robot according to the target obstacle information to obtain the first static change map may specifically include the following substeps:
Sub-step 2301, determining a target obstacle corresponding to the target obstacle information.
Sub-step 2302, if the target obstacle is an actively moving obstacle, adding the target obstacle information to a dynamic change map to obtain a first dynamic change map.
Sub-step 2303, if the target obstacle is a passive moving obstacle, adding the target obstacle information to a static change map to obtain a first static change map.
Specifically, after the first detection device detects obstacle information, if neither the original map nor the static change map contains that information, it may be determined to be target obstacle information. Whether the obstacle corresponding to the target obstacle information is an active moving obstacle is then determined: if the target obstacle is an active moving obstacle, the target obstacle information can be added to the dynamic change map to obtain the first dynamic change map; if it is a passive moving obstacle, the target obstacle information can be added to the static change map to obtain the first static change map.
Step 240, updating the first static change map according to the first detection information and the auxiliary detection information to obtain a second static change map.
The first detection information is information detected by the first detection device, and the auxiliary detection information is first ground distance information detected by the auxiliary detection device in the robot.
Step 250, updating the second static change map according to the second ground distance information detected by the auxiliary detection device to obtain a third static change map.
Step 260, merging the third static change map with the original map and the first dynamic change map to obtain a fusion map.
Step 270, performing path planning based on the fusion map to obtain the travel path information of the robot.
Specifically, after the third static change map is obtained, it may be merged with the original map and the first dynamic change map to obtain a fusion map containing the passive-moving-obstacle information of the third static change map, the active-moving-obstacle information of the first dynamic change map, and the obstacle information of the original map. The robot can use the obstacle information contained in the fusion map, together with the obstacle information perceived in real time by the lidar device and the camera device, to determine its travel path information; the navigation module can then plan the robot's travel path and send the planning result to the control module, so that the control module controls the robot to travel with safe obstacle avoidance according to the navigation commands it receives.
In an optional embodiment, performing path planning based on the fusion map to obtain the travel path information of the robot specifically further includes the following substeps:
Sub-step 2701, obtaining second image information and second distance information, wherein the second image information is the image information of the second area detected by the image recognition device, and the second distance information is the distance information of the second area detected by the distance measurement device.
Specifically, after the front area of the robot is divided into a near area and a far area, the far area may be regarded as the second area. The ranging device detects obstacles in the far area to obtain the real-time distance between the robot and those obstacles, which serves as the second distance information and from which obstacles can be recognized; the image recognition device detects image information of the far area as the second image information, from which obstacles can likewise be recognized.
Sub-step 2702, determining obstacle information of the second area based on the second image information and the second distance information.
Specifically, the far-area obstacle information corresponding to the second image information may be compared with the far-area obstacle information corresponding to the second distance information. If an obstacle found in the second image information is not contained in the second distance information, it may be determined to be a small obstacle in the far area; if it is contained in the second distance information, it may be determined to be a large obstacle in the far area, so that far-area obstacles are classified into large and small. The large and/or small far-area obstacles are then compared with the obstacles in the original map, and any large and/or small far-area obstacle not contained in the original map may be determined as the obstacle information of the second area.
In other words, the obstacle information of the second area may be obstacle information recognized by the camera device or the lidar device as lying in the far area and not present in the original map, as sketched below.
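A minimal sketch of this classification follows, assuming the camera and lidar detections have been associated by position beforehand so that shared detections compare equal; the container interfaces are assumptions.

    def classify_far_obstacles(camera_detections, lidar_detections, original_map):
        """Split far-area camera detections into large/small, keep map-unknown ones."""
        new_obstacles = []
        for det in camera_detections:
            # A camera-only detection lies below the lidar plane: a small obstacle.
            size = "large" if det in lidar_detections else "small"
            if not original_map.contains(det):
                new_obstacles.append((det, size))
        return new_obstacles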
Sub-step 2703, determining movement attribute information corresponding to the obstacle information of the second area.
Sub-step 2704, if the movement attribute information is active attribute information, adding the obstacle information of the second area to a dynamic change map of the robot based on the active attribute information.
Sub-step 2705, if the movement attribute information is passive attribute information, adding the obstacle information of the second area to a static change map of the robot based on the passive attribute information.
Specifically, determining the movement attribute information corresponding to the obstacle information of the second area amounts to determining whether each obstacle of the second area is an active moving obstacle or a passive moving obstacle. The movement attribute information corresponding to obstacle information may be classified into active attribute information, meaning the corresponding obstacle is an active moving obstacle, and passive attribute information, meaning the corresponding obstacle is a passive moving obstacle. If the movement attribute information corresponding to the obstacle information of the second area is active attribute information, the corresponding obstacle may be determined to be an active moving obstacle and added to the dynamic change map; if it is passive attribute information, the corresponding obstacle may be determined to be a passive moving obstacle and added to the static change map.
In a specific implementation, during the robot's safe obstacle-avoidance travel, persons and/or other mobile robots identified by the lidar device and the camera device may be treated as active moving obstacles, while static obstacles whose positions are found to have changed after the original map was built may be treated as passive moving obstacles. Specifically, the camera device can identify active moving obstacles such as persons and other mobile robots in real time, while the lidar device determines the position-change information of those obstacles, corrects their positions, and tracks their position changes in the dynamic change map. The dynamic change map is repeatedly reset to a blank map and the newly identified active moving obstacles are added to it, keeping it up to date. During the robot's movement, identified passive moving obstacles can be added to the static change map; to ensure that an identified passive moving obstacle really exists, it can be confirmed through repeated identification and positioning. Specifically, a threshold number of identification-positioning rounds may be set: if a passive moving obstacle has been identified and positioned more times than the preset threshold, each time with the result that it exists, it may be determined to really exist and added to the static change map of the current job. The portion of change common to the static change maps of several consecutive jobs may then serve as a trusted change map; for example, the common change portion of the static change maps of 3 consecutive jobs may be taken as the trusted change map. The trusted change map, together with the initial map obtained during mapping, then serves as a new original map. Meanwhile, as the robot keeps running, the dynamic change map can be updated according to the active moving obstacles detected in real time and the static change map according to the passive moving obstacles detected in real time. If the lidar device no longer recognizes a large obstacle recorded in the static change map, that obstacle information is deleted from the static change map; similarly, if the depth camera and the auxiliary sensors cannot identify a small obstacle recorded in the static change map, that obstacle information is deleted. If an obstacle region has been deleted from the static change maps of several consecutive jobs, that region can also be deleted from the trusted change map, and the new original map can be updated according to the trusted change map. A sketch of the confirmation and trusted-change rules follows.
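The sketch below illustrates the two rules; the per-job change sets and the recognition-count threshold of 5 are assumptions chosen for illustration, since the description fixes only the 3-consecutive-job window.

    N_JOBS = 3              # consecutive jobs that must agree (from the description)
    CONFIRM_THRESHOLD = 5   # assumed recognition-count threshold per obstacle

    def confirm_passive_obstacle(hit_counts, cell):
        """A passive obstacle is trusted once it is re-identified enough times."""
        hit_counts[cell] = hit_counts.get(cell, 0) + 1
        return hit_counts[cell] >= CONFIRM_THRESHOLD

    def trusted_changes(per_job_change_sets):
        """Cells changed in every one of the last N_JOBS static change maps."""
        recent = per_job_change_sets[-N_JOBS:]
        if len(recent) < N_JOBS:
            return set()
        return set.intersection(*recent)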
By continuously updating the dynamic change map and the static change map in this way, the subsequent processing may combine the updated first dynamic change map, the third static change map and the new original map into a fusion map, so that the fusion map is refreshed in real time as the obstacles change while the robot is driving. The obstacle information in the map is therefore more accurate, and path planning can be performed according to the obstacle information in the fusion map, realizing safe obstacle-avoidance driving of the robot.
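Assuming the three layers are occupancy grids of identical shape with 1 marking an occupied cell (a representation the application does not prescribe), the fusion step might look like this sketch:

import numpy as np

def build_fusion_map(original_grid: np.ndarray,
                     dynamic_grid: np.ndarray,
                     static_grid: np.ndarray) -> np.ndarray:
    """Fuse the layers cell-wise: a cell is occupied in the fusion map
    if any layer marks it occupied, so the planner avoids every obstacle."""
    return np.maximum.reduce([original_grid, dynamic_grid, static_grid])

The path planner then consumes only the fused grid, so obstacles known from the original map, newly confirmed passive obstacles, and currently tracked active obstacles are all avoided uniformly.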
In actual processing, if the current distance to a certain active moving obstacle is determined to be smaller than a specified distance, the robot may stop driving; for example, the specified distance may be 0.5 m, which is not limited in the present application. After stopping, the robot may wait for the active moving obstacle to move away by itself before resuming; alternatively, a time threshold may be set, for example 3 seconds, and if the waiting time exceeds the time threshold while the position of the active moving obstacle on the map has not changed, the robot may perform obstacle-avoidance driving and bypass the active moving obstacle.
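The stop-and-wait behaviour can be sketched as below; `robot` and `obstacle` are hypothetical interfaces, and the 0.5 m and 3 s values are the example thresholds from the text.

import time

STOP_DISTANCE_M = 0.5   # specified distance (example value from the text)
WAIT_TIMEOUT_S = 3.0    # time threshold (example value from the text)

def handle_active_obstacle(robot, obstacle, poll_interval=0.1):
    """Stop when an active moving obstacle is closer than the specified
    distance; resume if it moves away by itself, otherwise replan around
    it once the waiting time exceeds the threshold."""
    if robot.distance_to(obstacle) >= STOP_DISTANCE_M:
        return
    robot.stop()
    waited = 0.0
    while waited < WAIT_TIMEOUT_S:
        time.sleep(poll_interval)
        waited += poll_interval
        if robot.distance_to(obstacle) >= STOP_DISTANCE_M:
            robot.resume()            # obstacle moved away on its own
            return
    robot.replan_around(obstacle)     # position unchanged: bypass it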
It can be seen that, in the embodiment of the application, the distance information detected by the ranging device and the image information detected by the image recognition device are acquired, and obstacle recognition is performed according to the distance information and/or the image information to obtain the obstacle information of the first area. When the obstacle information is target obstacle information, the static change map of the robot is updated according to the target obstacle information to obtain a first static change map; the first static change map is then updated according to the first detection information and the auxiliary detection information to obtain a second static change map, and the second static change map is updated according to the second ground distance information detected by the auxiliary detection device to obtain a third static change map. The third static change map is further merged with the original map and the first dynamic change map to obtain a fusion map, and path planning is carried out based on the fusion map to obtain the driving path information of the robot, so that the robot can drive safely according to the driving path information. Safe obstacle avoidance of the robot is thereby realized, which alleviates the problems that an existing indoor robot relying on a depth camera cannot easily detect specular or transparent objects and cannot reliably detect obstacles when the camera is in a dark environment, both of which are hidden safety dangers in indoor environments.
Referring to fig. 3, a schematic flow chart of steps of a robot safety obstacle avoidance method according to an alternative embodiment of the present application is shown. The robot safety obstacle avoidance method can specifically comprise the following steps:
Step 310, obtaining distance information detected by the ranging device and image information detected by the image recognition device.
Step 320, identifying first obstacle information based on the distance information and/or second obstacle information based on the image information.
Specifically, a lidar device may be used as the ranging device: it detects the distance between the robot and obstacles in the near area as the distance information, so that the first obstacle information can be identified based on the distance information. A camera device may be used as the image recognition device: it detects image information of obstacles in the near area, so that obstacles can be recognized based on the image information and it can be determined whether each obstacle is an actively moving obstacle, realizing classified recognition of obstacles. In actual processing, the obstacles contained in the image information may be identified in real time based on the detected image information to determine whether each is an active moving obstacle or a passive moving obstacle, and the moving position of an active moving obstacle may be determined using the distance information detected by the lidar.
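A sketch of this division of labour, assuming a hypothetical `detector` with a `detect(image)` method returning labelled detections and a hypothetical `lidar.localize(scan, bearing)` that corrects a detection's position from the lidar scan; none of these interfaces are defined by the application.

def classify_and_localize(image, scan, detector, lidar):
    """Camera stream yields class labels (person / robot -> active moving,
    everything else -> passive moving); the lidar scan supplies corrected
    positions for the detected obstacles."""
    obstacles = []
    for det in detector.detect(image):                 # image-based recognition
        attribute = "active" if det.label in ("person", "robot") else "passive"
        position = lidar.localize(scan, det.bearing)   # distance-based correction
        obstacles.append((det.label, attribute, position))
    return obstacles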
Step 330, determining whether the map obstacle region information of the robot includes the first obstacle information and/or the second obstacle information.
Wherein the map obstacle region information includes obstacle region information in the static change map and obstacle region information in an original map of the robot.
And step 340, if the map obstacle region information does not include the first obstacle information, determining the first obstacle information as the target obstacle information.
And step 350, if the map obstacle region information does not contain the second obstacle information, determining the second obstacle information as the target obstacle information.
Specifically, the target obstacle information is obstacle information contained in neither the original map nor the static change map. After the lidar device detects the first obstacle information of the near area and the camera device detects the second obstacle information of the near area, the first obstacle information may be compared with the map obstacle region information to determine whether it is already included there, and hence whether it is target obstacle information; the second obstacle information may be compared with the map obstacle region information in the same way.
For example, when the map obstacle region information does not include the first obstacle information (or, likewise, the second obstacle information), that information may be determined as target obstacle information, and the static change map may be updated accordingly in the subsequent processing; for instance, if the obstacle corresponding to the target obstacle information is a passive moving obstacle, the target obstacle information may be added to the static change map to obtain the first static change map. Conversely, when the map obstacle region information already includes the first obstacle information (or the second obstacle information), that information may be determined as non-target obstacle information and left unprocessed.
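The membership test described above reduces to a simple comparison; in this sketch, `map_obstacle_regions` would be the union of the obstacle regions of the original map and the static change map, and the representations are hypothetical.

def determine_target_obstacles(candidates, map_obstacle_regions):
    """Compare detected obstacle information against the obstacle region
    information of the original map plus the static change map; anything
    not already contained there is target obstacle information."""
    targets, non_targets = [], []
    for obstacle in candidates:
        if obstacle in map_obstacle_regions:
            non_targets.append(obstacle)   # known region: no processing needed
        else:
            targets.append(obstacle)       # new region: update the change maps
    return targets, non_targets

Here `candidates` would hold both the first obstacle information from the lidar and the second obstacle information from the camera; targets then feed the map-update steps, while non-targets are discarded.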
In a specific implementation, the lidar device and the camera device may be used to identify obstacles in the near area, the identified obstacles may be compared with the obstacles in the original map and the static change map, and any obstacle information found in neither map may be taken as the target obstacle information.
In actual processing, after the first obstacle information and/or the second obstacle information is determined to be target obstacle information, the corresponding target obstacle may be judged, and the judgment result decides whether the target obstacle information is added to the static change map or the dynamic change map. Specifically, the camera device may classify the detected obstacle. If the detected obstacle is an actively moving obstacle, for example a person or another robot (which the application does not limit), and it is determined that neither the original map nor the static change map contains the obstacle, the obstacle may be determined to be a target obstacle that is an active moving obstacle and added to the dynamic change map, thereby updating the dynamic change map. Similarly, if the obstacle detected by the camera device is a passive moving obstacle and neither the original map nor the static change map contains it, the obstacle may be determined to be a target obstacle that is a passive moving obstacle and added to the static change map, thereby updating the static change map.
And step 360, if the obstacle information is the target obstacle information, updating the static change map of the robot according to the target obstacle information to obtain a first static change map.
And step 370, updating the first static change map according to the first detection information and the auxiliary detection information to obtain a second static change map.
The first detection information is information detected by the first detection equipment, and the auxiliary detection information is first ground distance information detected by the auxiliary detection equipment in the robot.
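The application does not spell out data structures for this second update, so the following sketch simply assumes, based on the large/small obstacle discussion earlier, that first characteristic obstacle information comes from the lidar distance information and second characteristic obstacle information from the camera image combined with the first ground distance information; both region dictionaries are hypothetical.

def update_to_second_static_map(first_static_map,
                                lidar_regions,
                                camera_ground_regions):
    """Sketch of step 370: refresh first characteristic obstacle regions
    from the lidar and second characteristic obstacle regions from the
    camera plus ground-distance readings (assumed mapping, not claimed)."""
    second_static_map = dict(first_static_map)
    second_static_map.update(lidar_regions)           # large-obstacle regions
    second_static_map.update(camera_ground_regions)   # small / low obstacles
    return second_static_map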
And step 380, updating the second static change map according to the second ground distance information detected by the auxiliary detection equipment to obtain a third static change map.
In an optional embodiment of the present application, the updating the second static change map according to the second ground distance information detected by the auxiliary detecting device to obtain a third static change map may specifically include the following substeps:
Sub-step 3801, obtaining the second ground distance information detected by the auxiliary detection device.
And sub-step 3802, if the second ground distance information is smaller than a preset first distance threshold, determining target feature obstacle information based on the second ground distance information, and updating the second static change map according to the target feature obstacle information to obtain the third static change map.
Specifically, the second ground distance information may be the distance, detected by an auxiliary sensor of the auxiliary detection device, to the ground or to an obstacle in the near area, and the first distance threshold may be the maximum distance from the auxiliary sensor to the ground in the near area.
For example, the maximum ground distance d_max of the two auxiliary sensors installed directly in front of the robot may be 120 cm, in which case the first distance threshold may be 120 cm; the maximum ground distance d_max of the two auxiliary sensors installed at the left front and the two installed at the right front of the robot may be 92 cm, in which case the first distance threshold may be 92 cm. This example is not limiting. During the movement of the robot, the ground distance detected by an auxiliary sensor may be taken as the second ground distance information, so that whether an obstacle is detected can be determined by comparing the ground distance corresponding to the second ground distance information with the set maximum distance d_max. If the ground distance corresponding to the second ground distance information is smaller than the set maximum distance d_max, it can be determined that the auxiliary sensor has detected an obstacle, and hence that an obstacle exists in the area in front of the robot.
In actual processing, if the ground distance corresponding to the second ground distance information is smaller than the set maximum distance d_max, it may be determined that an obstacle exists in the area in front of the robot. The detected obstacle may then be compared with the static change map and the original map; if neither contains the obstacle, it may be determined to be a specular reflection obstacle or a transparent obstacle, for example glass, which is not limited in the present application. The specular or transparent obstacle may be added to the second static change map to obtain the third static change map. Path planning can then be carried out according to the third static change map, so that specular reflection objects and transparent objects are detected and the robot can be controlled to avoid them while driving, realizing obstacle avoidance for specular and transparent objects.
And sub-step 3803, if the second ground distance information is greater than a second distance threshold, determining ground condition information according to the second ground distance information, and updating the second static change map according to the ground condition information to obtain the third static change map.
Wherein the second distance threshold is greater than the first distance threshold.
Specifically, the second distance threshold may be greater than the first distance threshold; for example, if the first distance threshold is d_max, the second distance threshold may be d_max + 5 cm. When the ground distance corresponding to the second ground distance information is greater than the second distance threshold, it may be determined that the ground condition information indicates that the road surface ahead is lower than the road surface on which the robot is located, or that there is a falling step ahead. At this time, the second static change map may be updated according to the ground condition information to obtain the third static change map. The navigation module may then re-plan the driving path of the robot according to the updated third static change map, and the control module may control the robot so that it drives safely.
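Both threshold branches of sub-steps 3802 and 3803 can be captured in one reading classifier; d_max and the 5 cm margin follow the examples above, and the use of metres as the unit is an assumption of this sketch.

def classify_ground_reading(d, d_max, step_margin=0.05):
    """Interpret one auxiliary-sensor reading `d` (metres) against the
    calibrated ground distance `d_max`: first threshold = d_max,
    second threshold = d_max + 5 cm, as in the examples in the text."""
    if d < d_max:
        # Something intercepts the beam before the floor: a candidate
        # obstacle; if it appears in neither the original map nor the
        # static change map, it is treated as a specular reflection or
        # transparent obstacle such as glass.
        return "obstacle"
    if d > d_max + step_margin:
        # The beam travels past the expected floor: the surface ahead is
        # lower than the robot's, or there is a falling step.
        return "drop"
    return "ground"

An "obstacle" reading that matches nothing in the original or static change map is then handled as the specular or transparent case described above, while a "drop" reading triggers the ground-condition update and re-planning.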
And step 390, performing path planning according to the third static change map to obtain the driving path information of the robot.
It can be seen that, in the embodiment of the application, by acquiring the distance information detected by the ranging device and the image information detected by the image recognition device, the first obstacle information can be identified based on the distance information and/or the second obstacle information can be identified based on the image information. When the map obstacle region information of the robot does not contain the first obstacle information, the first obstacle information is determined to be target obstacle information; when it does not contain the second obstacle information, the second obstacle information is determined to be target obstacle information. The static change map of the robot is then updated according to the target obstacle information to obtain the first static change map; the first static change map is updated according to the first detection information and the auxiliary detection information to obtain the second static change map; and the second static change map is updated according to the second ground distance information detected by the auxiliary detection device to obtain the third static change map. Path planning is then performed according to the third static change map to obtain the driving path information of the robot, so that the robot can drive safely according to this information. Safe obstacle avoidance of the robot is thereby realized, alleviating the problems that an existing indoor robot relying on a depth camera cannot easily detect specular or transparent objects such as glass and cannot reliably detect obstacles in a dark environment.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts, but those skilled in the art will understand that the embodiments are not limited by the order of the acts described, since some steps may be performed in other orders or concurrently in accordance with the embodiments.
As shown in fig. 4, an embodiment of the present application provides a robot safety obstacle avoidance apparatus 400, including:
A first detection and identification module for identifying obstacle information of a first area through a first detection device of the robot;
the first updating module of the static change map is used for updating the static change map of the robot according to the target obstacle information when the obstacle information is the target obstacle information, so as to obtain a first static change map;
the second updating module of the static change map is used for updating the first static change map according to first detection information and auxiliary detection information to obtain a second static change map, wherein the first detection information is information detected by the first detection equipment, and the auxiliary detection information is first ground distance information detected by the auxiliary detection equipment in the robot;
The third updating module of the static change map is used for updating the second static change map according to the second ground distance information detected by the auxiliary detection equipment to obtain a third static change map;
And the path planning module is used for carrying out path planning according to the third static change map to obtain the driving path information of the robot.
Optionally, the first detection device includes a ranging device and an image recognition device, and the identifying, by the first detection device of the robot, obstacle information of the first area includes:
Acquiring distance information detected by the ranging device and image information detected by the image recognition device;
And carrying out obstacle recognition according to the distance information and/or the image information to obtain obstacle information of the first area.
Optionally, the identifying the obstacle according to the distance information and/or the image information to obtain the obstacle information of the first area includes:
identifying first obstacle information based on the distance information and/or identifying second obstacle information based on the image information;
Determining whether map obstacle region information of the robot contains the first obstacle information and/or the second obstacle information, wherein the map obstacle region information contains obstacle region information in the static change map and obstacle region information in an original map of the robot;
if the map obstacle region information does not contain the first obstacle information and/or the second obstacle information, determining the first obstacle information and/or the second obstacle information as the target obstacle information;
And if the map obstacle region information comprises the first obstacle information and/or the second obstacle information, determining the first obstacle information and/or the second obstacle information as non-target obstacle information.
Optionally, updating the static change map of the robot according to the target obstacle information to obtain a first static change map includes:
Determining a target obstacle corresponding to the target obstacle information;
If the target obstacle is an active moving obstacle, adding the target obstacle information to a dynamic change map to obtain a first dynamic change map;
and if the target obstacle is a passive moving obstacle, adding the target obstacle information to a static change map to obtain a first static change map.
Optionally, the updating the first static change map according to the first detection information and the auxiliary detection information to obtain a second static change map includes:
Acquiring first distance information detected by the ranging device, first image information detected by the image recognition device, and first ground distance information detected by the auxiliary detection device;
Updating first characteristic obstacle information in the first static change map based on the first distance information, and updating second characteristic obstacle information in the first static change map based on the first image information and the first ground distance information to obtain an updated static change map;
And determining the updated static change map as the second static change map.
Optionally, performing path planning according to the third static change map to obtain the driving path information of the robot includes:
Combining the third static change map with the original map and the first dynamic change map to obtain a fusion map;
And carrying out path planning based on the fusion map to obtain the driving path information of the robot.
Optionally, the updating the second static change map according to the second ground distance information detected by the auxiliary detection device to obtain a third static change map includes:
acquiring second ground distance information detected by the auxiliary detection equipment;
If the second ground distance information is smaller than a preset first distance threshold value, determining target feature obstacle information based on the second ground distance information, and updating the second static change map according to the target feature obstacle information to obtain a third static change map;
And if the second ground distance information is larger than a second distance threshold, determining ground condition information according to the second ground distance information, and updating the second static change map according to the ground condition information to obtain a third static change map, wherein the second distance threshold is larger than the first distance threshold.
Optionally, the method further comprises:
Acquiring second image information and second distance information, wherein the second image information is the image information of a second area detected by the image recognition equipment, and the second distance information is the distance information of the second area detected by the distance measurement equipment;
Determining obstacle information of a second area based on the second image information and the second distance information;
Determining movement attribute information corresponding to the obstacle information of the second area;
If the movement attribute information is active attribute information, based on the active attribute information, adding the obstacle information of the second area to a dynamic change map of the robot;
And if the movement attribute information is passive attribute information, adding the obstacle information of the second area to a static change map of the robot based on the passive attribute information.
It should be noted that, the robot safety obstacle avoidance device provided by the embodiment of the application can execute the robot safety obstacle avoidance method provided by any embodiment of the application, and has the corresponding functions and beneficial effects of the execution method.
In a specific implementation, the robot safety obstacle avoidance device may be integrated into a robot, so that the robot can plan a path according to the obstacle information detected by the first detection device and the auxiliary detection device, thereby realizing safe obstacle avoidance. The robot may be composed of two or more physical entities or of a single physical entity; for example, the device may be a personal computer (PC), a server, or the like, which is not particularly limited in the embodiment of the present application.
As shown in fig. 5, an embodiment of the present application provides a robot including a processor 111, a communication interface 112, a memory 113, and a communication bus 114, wherein the processor 111, the communication interface 112, and the memory 113 communicate with each other through the communication bus 114; the memory 113 is configured to store a computer program; and the processor 111 is configured to implement, when executing the program stored in the memory 113, the steps of the robot safety obstacle avoidance method provided in any one of the foregoing method embodiments. These steps may include: identifying obstacle information of a first area through a first detection device of the robot; if the obstacle information is target obstacle information, updating a static change map of the robot according to the target obstacle information to obtain a first static change map; updating the first static change map according to first detection information and auxiliary detection information to obtain a second static change map, the first detection information being information detected by the first detection device and the auxiliary detection information being first ground distance information detected by an auxiliary detection device in the robot; updating the second static change map according to second ground distance information detected by the auxiliary detection device to obtain a third static change map; and planning a path according to the third static change map to obtain driving path information of the robot.
The embodiment of the application also provides a computer readable storage medium, on which a computer program is stored, which when being executed by a processor, realizes the steps of the robot safety obstacle avoidance method provided by any one of the method embodiments.
It should be noted that in this document, relational terms such as "first" and "second" and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The foregoing is only a specific embodiment of the application to enable those skilled in the art to understand or practice the application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. A robot safety obstacle avoidance method, the method comprising:
Identifying obstacle information of a first area by a first detection device of the robot;
If the obstacle information is target obstacle information, updating a static change map of the robot according to the target obstacle information to obtain a first static change map, wherein the method comprises the following steps: determining a target obstacle corresponding to the target obstacle information; if the target obstacle is an active moving obstacle, adding the target obstacle information to a dynamic change map of the robot to obtain a first dynamic change map; if the target obstacle is a passive moving obstacle, adding the target obstacle information to a static change map of the robot to obtain a first static change map;
Updating the first static change map according to first detection information and auxiliary detection information to obtain a second static change map, wherein the first detection information is information detected by the first detection equipment, and the auxiliary detection information is first ground distance information detected by the auxiliary detection equipment in the robot;
updating the second static change map according to the second ground distance information detected by the auxiliary detection equipment to obtain a third static change map;
and planning a path according to the third static change map to obtain the running path information of the robot, wherein the running path information comprises the following steps: combining the third static change map with the original map of the robot and the first dynamic change map to obtain a fusion map; and carrying out path planning based on the fusion map to obtain the driving path information of the robot.
2. The method of claim 1, wherein the first detection device includes a ranging device and an image recognition device, and the identifying, by the first detection device of the robot, obstacle information of the first area comprises:
Acquiring distance information detected by the ranging device and image information detected by the image recognition device;
And carrying out obstacle recognition according to the distance information and/or the image information to obtain obstacle information of the first area.
3. The method according to claim 2, wherein the performing obstacle recognition according to the distance information and/or the image information to obtain the obstacle information of the first area includes:
identifying first obstacle information based on the distance information and/or identifying second obstacle information based on the image information;
Determining whether map obstacle region information of the robot contains the first obstacle information and/or the second obstacle information, wherein the map obstacle region information contains obstacle region information in the static change map and obstacle region information in an original map of the robot;
if the map obstacle region information does not contain the first obstacle information, determining the first obstacle information as the target obstacle information;
and if the map obstacle region information does not contain the second obstacle information, determining the second obstacle information as the target obstacle information.
4. The method of claim 2, wherein updating the first static change map based on the first detection information and the auxiliary detection information to obtain a second static change map comprises:
Acquiring first distance information detected by the ranging device, first image information detected by the image recognition device, and first ground distance information detected by the auxiliary detection device;
Updating first characteristic obstacle information in the first static change map based on the first distance information, and updating second characteristic obstacle information in the first static change map based on the first image information and the first ground distance information to obtain an updated static change map;
And determining the updated static change map as the second static change map.
5. The method according to claim 1, wherein updating the second static change map according to the second ground distance information detected by the auxiliary detecting device to obtain a third static change map includes:
acquiring second ground distance information detected by the auxiliary detection equipment;
If the second ground distance information is smaller than a preset first distance threshold value, determining target feature obstacle information based on the second ground distance information, and updating the second static change map according to the target feature obstacle information to obtain a third static change map;
And if the second ground distance information is larger than a second distance threshold, determining ground condition information according to the second ground distance information, and updating the second static change map according to the ground condition information to obtain a third static change map, wherein the second distance threshold is larger than the first distance threshold.
6. The method according to any one of claims 2 to 4, further comprising:
Acquiring second image information and second distance information, wherein the second image information is the image information of a second area detected by the image recognition equipment, and the second distance information is the distance information of the second area detected by the distance measurement equipment;
Determining obstacle information of a second area based on the second image information and the second distance information;
Determining movement attribute information corresponding to the obstacle information of the second area;
If the movement attribute information is active attribute information, based on the active attribute information, adding the obstacle information of the second area to a dynamic change map of the robot;
And if the movement attribute information is passive attribute information, adding the obstacle information of the second area to a static change map of the robot based on the passive attribute information.
7. A robot safety obstacle avoidance device, comprising:
A first detection and identification module for identifying obstacle information of a first area through a first detection device of the robot;
The first updating module of the static change map is configured to update the static change map of the robot according to the target obstacle information when the obstacle information is the target obstacle information, to obtain a first static change map, and includes: determining a target obstacle corresponding to the target obstacle information; if the target obstacle is an active moving obstacle, adding the target obstacle information to a dynamic change map of the robot to obtain a first dynamic change map; if the target obstacle is a passive moving obstacle, adding the target obstacle information to a static change map of the robot to obtain a first static change map;
the second updating module of the static change map is used for updating the first static change map according to first detection information and auxiliary detection information to obtain a second static change map, wherein the first detection information is information detected by the first detection equipment, and the auxiliary detection information is first ground distance information detected by the auxiliary detection equipment in the robot;
The third updating module of the static change map is used for updating the second static change map according to the second ground distance information detected by the auxiliary detection equipment to obtain a third static change map;
The path planning module is configured to perform path planning according to the third static change map, and obtain driving path information of the robot, where the path planning module includes: combining the third static change map with the original map of the robot and the first dynamic change map to obtain a fusion map; and carrying out path planning based on the fusion map to obtain the driving path information of the robot.
8. The robot is characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory are communicated with each other through the communication bus;
a memory for storing a computer program;
A processor for implementing the steps of the robot safety obstacle avoidance method of any one of claims 1 to 6 when executing a program stored on a memory.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the robot safety obstacle avoidance method according to any of claims 1-6.
CN202111522716.4A 2021-12-13 2021-12-13 Robot safety obstacle avoidance method and device, robot and storage medium Active CN114415659B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111522716.4A CN114415659B (en) 2021-12-13 2021-12-13 Robot safety obstacle avoidance method and device, robot and storage medium

Publications (2)

Publication Number Publication Date
CN114415659A CN114415659A (en) 2022-04-29
CN114415659B true CN114415659B (en) 2024-05-28

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05257533A (en) * 1992-03-12 1993-10-08 Tokimec Inc Method and device for sweeping floor surface by moving robot
CN106595631A (en) * 2016-10-25 2017-04-26 纳恩博(北京)科技有限公司 Method for avoiding obstacles and electronic equipment
CN108344414A (en) * 2017-12-29 2018-07-31 中兴通讯股份有限公司 A kind of map structuring, air navigation aid and device, system
CN109709945A (en) * 2017-10-26 2019-05-03 深圳市优必选科技有限公司 Path planning method and device based on obstacle classification and robot
CN112344945A (en) * 2020-11-24 2021-02-09 山东大学 Indoor distribution robot path planning method and system and indoor distribution robot
CN112629520A (en) * 2020-11-25 2021-04-09 北京集光通达科技股份有限公司 Robot navigation and positioning method, system, equipment and storage medium
CN112987728A (en) * 2021-02-07 2021-06-18 科益展智能装备有限公司 Robot environment map updating method, system, equipment and storage medium
CN113146683A (en) * 2021-03-18 2021-07-23 深兰科技(上海)有限公司 Robot chassis and robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant