CN114721404B - Obstacle avoidance method, robot and storage medium - Google Patents

Obstacle avoidance method, robot and storage medium

Info

Publication number
CN114721404B
Authority
CN
China
Prior art keywords
obstacle
point cloud
dimensional point
frame
cloud information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210637875.7A
Other languages
Chinese (zh)
Other versions
CN114721404A (en)
Inventor
黄晓辉
区士超
张桢
林智宾
刘晓涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Super Node Innovative Technology Shenzhen Co ltd
Original Assignee
Super Node Innovative Technology Shenzhen Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Super Node Innovative Technology Shenzhen Co ltd
Priority to CN202210637875.7A
Publication of CN114721404A
Application granted
Publication of CN114721404B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Image Analysis (AREA)
  • Manipulator (AREA)

Abstract

The application provides an obstacle avoidance method, a robot and a storage medium. Three-dimensional point cloud information of an obstacle to be identified is extracted from acquired environment image data; inference is performed on the environment image data using a pre-trained obstacle recognition deep learning model to obtain the category and the position frame of the obstacle to be identified; the three-dimensional point cloud information, the category and the position frame are combined to determine a detection result of the obstacle to be identified; and finally the obstacle is avoided according to the detection result. Because the detection result is determined by combining the three-dimensional point cloud information with the category and the position frame, the robot recognizes obstacles in the environment accurately, and avoiding obstacles according to the detection result is therefore effective and reduces the probability of collision or becoming stuck.

Description

Obstacle avoidance method, robot and storage medium
Technical Field
The present application relates to the field of robotics, and in particular, to an obstacle avoidance method, a robot, and a storage medium.
Background
At present, robots such as sweeping robots are widely used in home life because they can take over some household work. However, owing to the complexity of the home environment, a sweeping robot relying on existing simple obstacle avoidance technology is easily stuck on objects such as clothes, electric wires or a fan base. Emerging AI recognition technology has therefore gradually been integrated into the obstacle avoidance systems of household robots. However, conventional AI recognition generally depends on a two-dimensional picture as input and is easily disturbed by a complex background or a deformed object in the home environment, so the robot identifies obstacles with low accuracy and still gets stuck easily.
Disclosure of Invention
The application provides an obstacle avoidance method, a robot and a storage medium in which the three-dimensional point cloud information, category and position frame of an obstacle to be identified are combined, improving the accuracy with which the robot recognizes obstacles in the environment and effectively preventing the robot from becoming stuck.
In a first aspect, an embodiment of the present application provides an obstacle avoidance method, where the method includes:
acquiring environment image data, and extracting three-dimensional point cloud information of an obstacle to be identified from the environment image data;
performing inference on the environment image data using a pre-trained obstacle recognition deep learning model to obtain the category and the position frame of the obstacle to be identified;
determining a detection result of the obstacle to be identified according to the three-dimensional point cloud information, the category and the position frame;
and avoiding the obstacle according to the detection result.
In a second aspect, embodiments of the present application provide a robot, including a memory and a processor;
the memory is used for storing a computer program;
the processor is configured to execute the computer program and, when executing the computer program, implement the steps of the obstacle avoidance method according to the first aspect.
In a third aspect, an embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the processor is enabled to implement the steps of the obstacle avoidance method according to the first aspect.
The embodiments of the application provide an obstacle avoidance method, a robot and a storage medium. Three-dimensional point cloud information of an obstacle to be identified is extracted from acquired environment image data; inference is performed on the environment image data using a pre-trained obstacle recognition deep learning model to obtain the category and the position frame of the obstacle to be identified; the three-dimensional point cloud information, the category and the position frame are combined to determine a detection result of the obstacle to be identified; and finally the obstacle is avoided according to the detection result. Determining the detection result from the three-dimensional point cloud information together with the category and the position frame makes the robot's recognition of obstacles in the environment accurate, so that avoiding obstacles according to the detection result is effective and the probability of collision or becoming stuck is reduced.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure of the embodiments of the present application.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic view of an application scenario of an obstacle avoidance method according to an embodiment of the present application;
FIG. 2 is a block diagram of the processor of FIG. 1;
fig. 3 is a schematic flowchart of an obstacle avoidance method according to an embodiment of the present application;
fig. 4 is a schematic block diagram of a robot provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
The flow diagrams depicted in the figures are merely illustrative and do not necessarily include all of the elements and operations/steps, nor do they necessarily have to be performed in the order depicted. For example, some operations/steps may be decomposed, combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
It should be noted that the obstacle avoidance method, robot and storage medium provided in the present application may be applied to obstacle avoidance by mobile robots serving various places, including but not limited to residences, stations, shopping malls, hospitals, schools, elder-care institutions, hotels, scenic spots, large venues and office buildings. Specifically, environment image data is first acquired and three-dimensional point cloud information of the obstacle to be identified is extracted from it; inference is then performed on the environment image data using a pre-trained obstacle recognition deep learning model to obtain the category and the position frame of the obstacle to be identified; finally, a detection result of the obstacle to be identified is determined from the three-dimensional point cloud information, the category and the position frame. This improves the accuracy with which the robot identifies obstacles in the environment, so that avoiding them according to the detection result effectively prevents the robot from colliding or becoming stuck.
As shown in fig. 1, fig. 1 is a schematic view of an application scenario of an obstacle avoidance method according to an embodiment of the present application.
As shown in fig. 1, the obstacle avoidance method provided in this embodiment is applied to a robot 10, which may be a sweeping robot working in a residence, a mall, a school, a hospital or other indoor places. It should be understood that the obstacle avoidance method provided by the embodiment of the application is not limited to sweeping robots; the robot may also be a mobile service robot such as a guide robot or a meal delivery robot.
For example, the sweeping robot 10 may acquire environment image data and extract three-dimensional point cloud information of an obstacle to be identified from it; perform inference on the environment image data using a pre-trained obstacle recognition deep learning model to obtain the category and the position frame of the obstacle to be identified; and combine the three-dimensional point cloud information with the category and the position frame to determine the detection result of the obstacle to be identified. This improves the accuracy of obstacle identification in the environment, so that when the robot then avoids the obstacle according to the detection result, the obstacle avoidance effect improves and the robot is effectively prevented from becoming stuck.
Specifically, as shown in fig. 2, fig. 2 is a schematic structural diagram of the processor in fig. 1. As can be seen from fig. 2, in an embodiment of the present application the robot 10 includes a processor 101 and a binocular vision camera 102. The processor 101 includes a micro-processing module 1011 (an Advanced RISC Machine, ARM, core), a digital signal processing module 1012 (DSP) and an embedded neural network processing unit 1013 (NPU). Specifically, the ARM 1011 is configured to control the binocular vision camera 102 to acquire environment image data, extract three-dimensional point cloud information of the obstacle to be identified from the acquired data using a binocular disparity algorithm, combine the three-dimensional point cloud information with the category and position information of the obstacle to be identified to obtain its detection result, and avoid the obstacle according to the detection result. The DSP 1012 is used to perform view matching for the binocular vision camera 102 and to calculate its disparity values. The NPU 1013 is loaded with a pre-trained obstacle recognition deep learning model and performs inference on the environment image data acquired by the ARM 1011 to obtain the category and position of the obstacle to be identified.
Specifically, the binocular vision camera 102 may be an active or a passive binocular vision camera, using speckle, coded or stripe structured light as the light source. Based on the parallax principle, two images of the obstacle to be identified are captured from different positions, and the three-dimensional point cloud information of the obstacle is obtained by calculating the positional deviation between corresponding points in the two images. This approach offers high imaging resolution and precision for an object's three-dimensional point cloud, strong resistance to intense light and complex backgrounds, and a long identification distance, but it also suffers from high algorithmic complexity and, depending on the object's characteristics, may fail to generate effective three-dimensional point cloud information.
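To make the parallax principle concrete, the following minimal sketch back-projects a dense disparity map into a three-dimensional point cloud using Z = f·B/d. The calibration values and the disparity threshold are illustrative assumptions, not parameters taken from the patent.
```python
import numpy as np

# Illustrative calibration values; real ones come from camera calibration.
FOCAL_PX = 640.0        # focal length in pixels (assumed)
BASELINE_M = 0.06       # distance between the two cameras in metres (assumed)
CX, CY = 320.0, 240.0   # principal point in pixels (assumed)

def disparity_to_points(disparity, min_disp=0.5):
    """Back-project a dense disparity map (H x W, in pixels) to an N x 3
    point cloud using the parallax principle: Z = f * B / d."""
    v, u = np.indices(disparity.shape)
    valid = disparity > min_disp                  # near-zero disparity gives unreliable depth
    z = FOCAL_PX * BASELINE_M / disparity[valid]
    x = (u[valid] - CX) * z / FOCAL_PX
    y = (v[valid] - CY) * z / FOCAL_PX
    return np.stack([x, y, z], axis=1)
```
Weakly textured or featureless surfaces leave holes in the disparity map, which is exactly the failure mode the passage notes for obstacles that cannot produce effective three-dimensional point cloud information.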
Specifically, the pre-trained obstacle recognition deep learning model is a neural network model based on deep learning: a preset neural network model is trained and corrected on a large amount of sample data until the resulting recognition model meets preset targets for obstacle recognition accuracy and recall. Such a model offers high recognition efficiency and accuracy, but its recognition of complex backgrounds and deformed objects is comparatively poor.
The robot provided by the embodiment of the application acquires environment image data through the binocular vision camera and extracts three-dimensional point cloud information of the obstacle to be identified using a binocular stereo disparity algorithm; performs inference on the environment image data with the pre-trained obstacle recognition deep learning model on the embedded neural network processing unit to obtain the category and the position frame of the obstacle to be identified; determines, in the micro-processing module, the detection result of the obstacle to be identified from the three-dimensional point cloud information, the category and the position frame; and avoids the obstacle according to the detection result. This effectively avoids both the inaccurate identification that arises when the binocular vision camera cannot generate effective three-dimensional point cloud information for weak-featured or featureless obstacles, and the poor recognition of complex backgrounds and deformed objects typical of a conventional neural network model. Combining the three-dimensional point cloud information with the category and position information identified by the neural network model improves the accuracy of obstacle identification.
Referring to fig. 3, fig. 3 is a schematic flowchart of an obstacle avoidance method according to an embodiment of the present disclosure. The method is implemented by the robot shown in fig. 1 and fig. 2. As can be seen from fig. 3, the obstacle avoidance method provided in the embodiment of the present application includes steps S301 to S304, detailed as follows:
S301: acquiring environment image data, and extracting three-dimensional point cloud information of the obstacle to be identified from the environment image data.
The environment image data is image data of the environment within the robot's field of view. When the robot detects an obstacle to be identified within its field of view while driving, it acquires environment image data and extracts the three-dimensional point cloud information of the obstacle from it. The obstacle can then be avoided using the three-dimensional point cloud information alone, or the point cloud information can be combined with the category and position information of the obstacle, so that collision or even becoming stuck is avoided.
In one embodiment, the robot acquires environment image data through the binocular vision camera and extracts the three-dimensional point cloud information of the obstacle to be identified using a binocular stereo disparity algorithm.
When the object's features are distinct, the three-dimensional point cloud information effectively reflects the three-dimensional geometry of the obstacle to be identified and can effectively help the robot avoid it.
S302: performing inference on the environment image data using the pre-trained obstacle recognition deep learning model to obtain the category and the position frame of the obstacle to be identified.
In one embodiment, the pre-trained obstacle recognition deep learning model is a neural network model trained and corrected on a large number of data samples, with high recognition accuracy for obstacles in the environment. However, when the obstacle to be identified sits in a complex background or is a deformed object, its recognition accuracy drops. Therefore, after inference on the environment image data yields the category and the position frame of the obstacle to be identified, these are combined with the three-dimensional point cloud information to determine the detection result. This not only counters the accuracy loss caused by a complex background or a deformed obstacle, but also resolves the inaccurate identification that occurs when, for an obstacle with indistinct features, the three-dimensional point cloud information cannot effectively reflect its three-dimensional geometry. Obstacle recognition accuracy is thereby effectively improved.
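The patent does not name a particular network, so the sketch below stands in for the "pre-trained obstacle recognition deep learning model" with an off-the-shelf detector; Faster R-CNN from torchvision is chosen purely for illustration, and detect_obstacle is a hypothetical helper.
```python
import torch
import torchvision

# Any detector mapping an image to (boxes, labels, scores) can play the role
# of the pre-trained obstacle recognition model; this choice is illustrative.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_obstacle(image_chw):
    """Run inference on a 3 x H x W float tensor in [0, 1] and return the
    highest-scoring (category, position frame) pair, or (None, None)."""
    with torch.no_grad():
        out = model([image_chw])[0]   # dict with "boxes", "labels", "scores"
    if out["scores"].numel() == 0:
        return None, None             # no obstacle hypothesis in this frame
    best = out["scores"].argmax()
    return int(out["labels"][best]), out["boxes"][best].tolist()  # (x1, y1, x2, y2)
```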
S303: determining the detection result of the obstacle to be identified according to its three-dimensional point cloud information, category and position frame.
In one embodiment, determining the detection result of the obstacle to be identified according to its three-dimensional point cloud information, category and position frame includes: determining a contour frame of the obstacle to be identified from the three-dimensional point cloud information; then determining the overlap rate of the contour frame and the position frame, and determining the detection result of the obstacle to be identified according to that overlap rate.
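The patent does not spell out how the contour frame is built from the point cloud; one minimal stand-in is to take the axis-aligned bounds of the obstacle's points after projecting them into the image plane. The helper below assumes pixel coordinates are already available, which is an assumption for illustration.
```python
import numpy as np

def contour_frame(points_uv):
    """Axis-aligned contour frame (x1, y1, x2, y2) enclosing the pixel
    coordinates of an obstacle's projected point cloud. Returns None for an
    empty cloud, mirroring the featureless-obstacle case in the text."""
    pts = np.asarray(points_uv, dtype=float)
    if pts.size == 0:
        return None
    (x1, y1), (x2, y2) = pts.min(axis=0), pts.max(axis=0)
    return float(x1), float(y1), float(x2), float(y2)
```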
Specifically, the intersection-over-union of the contour frame and the position frame of the obstacle to be identified may be used as their overlap rate, and the detection result of the obstacle to be identified is then determined according to this intersection-over-union.
For example, if the contour frame of the obstacle to be identified is A and the position frame is B, the intersection-over-union of the two frames may be expressed as (A∩B)/(A∪B).
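As a concrete sketch of this overlap computation for axis-aligned frames represented as (x1, y1, x2, y2) tuples (a representation assumed here for illustration):
```python
def box_area(box):
    """Area of an axis-aligned frame (x1, y1, x2, y2)."""
    return max(0.0, box[2] - box[0]) * max(0.0, box[3] - box[1])

def intersection_area(a, b):
    """Area of A ∩ B for two axis-aligned frames."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    return max(0.0, x2 - x1) * max(0.0, y2 - y1)

def iou(a, b):
    """Intersection-over-union (A ∩ B) / (A ∪ B)."""
    inter = intersection_area(a, b)
    union = box_area(a) + box_area(b) - inter
    return inter / union if union > 0 else 0.0
```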
In one embodiment, determining the overlap rate of the contour frame and the position frame of the obstacle to be identified, and determining the detection result according to that overlap rate, includes: if the overlap rate of the contour frame and the position frame is greater than a first preset threshold, taking as the detection result the category obtained by performing inference on the environment image data with the pre-trained obstacle recognition deep learning model, together with the three-dimensional point cloud information of the obstacle extracted from the environment image data.
It should be understood that when the three-dimensional point cloud of the obstacle to be identified is distinct, the point cloud information calculated via the binocular disparity principle is complete and the corresponding contour frame is accurate. At the same time, the object category and position frame recognized by the pre-trained obstacle recognition deep learning model are also accurate. In this case, using the category of the obstacle together with its three-dimensional point cloud information extracted from the environment image data as the detection result lets the robot avoid the obstacle effectively and prevents collision or becoming stuck.
If the overlap rate of the contour frame and the position frame is less than or equal to the first preset threshold, the detection result must be determined further, either from a first ratio of the intersection of the contour frame and the position frame to the contour frame, or from a second ratio of that intersection to the position frame.
The first ratio, of the intersection of the contour frame and the position frame to the contour frame, may be expressed as (A∩B)/A; the second ratio, of the same intersection to the position frame, as (A∩B)/B; where A denotes the contour frame of the obstacle to be identified and B its position frame.
Specifically, determining the detection result of the obstacle to be identified from the first ratio or the second ratio includes: if the first ratio is greater than a second preset threshold, or the second ratio is greater than the second preset threshold, taking as the detection result the category obtained by inference on the environment image data with the pre-trained obstacle recognition deep learning model, together with the three-dimensional point cloud information extracted from the environment image data; if the first ratio is less than or equal to the second preset threshold and the second ratio is also less than or equal to the second preset threshold, determining the detection result according to the intersection of the contour frame and the position frame.
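Putting the two threshold tests together, the decision cascade described above can be sketched as follows, using the box helpers from the earlier snippet. The threshold values are illustrative, and resolve_by_intersection is a hypothetical placeholder for the empty-intersection branch detailed in the following paragraphs.
```python
def fuse_detection(contour_box, position_box, category, cloud,
                   t_overlap=0.5, t_ratio=0.8):
    """Combine the point-cloud contour frame with the model's position frame.
    Threshold values are illustrative; the patent only calls them preset."""
    inter = intersection_area(contour_box, position_box)
    if iou(contour_box, position_box) > t_overlap:
        return category, cloud             # both sources agree: trust them directly
    a, b = box_area(contour_box), box_area(position_box)
    ratio_a = inter / a if a > 0 else 0.0  # (A ∩ B) / A
    ratio_b = inter / b if b > 0 else 0.0  # (A ∩ B) / B
    if ratio_a > t_ratio or ratio_b > t_ratio:
        return category, cloud             # one frame largely contains the other
    # hypothetical helper for the empty-intersection branch described below
    return resolve_by_intersection(contour_box, position_box, category, cloud)
```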
It should be understood that when the three-dimensional point cloud of the obstacle to be identified is indistinct or discrete, the point cloud information calculated via the binocular disparity principle may be incomplete or discontinuous, and several contour frames may then exist. For example, a white wall as the obstacle to be identified produces no corresponding three-dimensional point cloud information at all, while scattered obstacles such as paper scraps or a tangled earphone cord produce point cloud information in a discrete state.
In these cases the object category and position frame recognized by the pre-trained obstacle recognition deep learning model remain comparatively accurate. Comparing the first ratio (of the intersection of the contour frame and the position frame to the contour frame) or the second ratio (of that intersection to the position frame) against the second preset threshold therefore serves as the criterion for the detection result. This avoids the recognition errors that would occur if the geometry of the object were judged directly from three-dimensional point cloud information that is incomplete or discontinuous and thus cannot form a valid contour frame.
When both the first ratio and the second ratio are less than or equal to the second preset threshold, either the three-dimensional point cloud of the obstacle cannot form an effective contour, or the background of the obstacle is complex or the obstacle is deformed, so that the position frame is unreliable. In either case the detection result of the obstacle to be identified must be determined further from the intersection of the contour frame and the position frame, which improves the detection accuracy for the obstacle.
Specifically, determining the detection result of the obstacle to be identified according to the intersection of the contour frame and the position frame includes: if the intersection of the contour frame and the position frame is empty, determining the state of the three-dimensional point cloud information of the obstacle; if that state is a discrete state, taking as the detection result the category obtained by inference on the environment image data with the pre-trained obstacle recognition deep learning model, together with statistical three-dimensional point cloud information, that is, the point cloud information obtained by statistics over the three-dimensional points inside the position frame.
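The patent names the discrete and aggregated states without defining a test for them. As one plausible stand-in, the sketch below labels a cloud by its mean nearest-neighbour spacing; both the criterion and the threshold are assumptions.
```python
import numpy as np

def cloud_state(points, spacing_thresh=0.05):
    """Label a point cloud 'aggregated' or 'discrete' by mean nearest-neighbour
    spacing (metres). O(N^2), acceptable for a single obstacle's cloud."""
    pts = np.asarray(points, dtype=float)
    if len(pts) < 2:
        return "discrete"
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)               # ignore self-distances
    return "aggregated" if d.min(axis=1).mean() < spacing_thresh else "discrete"
```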
In one embodiment, when the obstacle to be identified has no three-dimensional point cloud information, or has point cloud information that is in a discrete state, the point cloud calculated via the binocular disparity principle is discrete and cannot form an effective contour frame; likewise, when the obstacle sits in a relatively complex background or is deformed, the pre-trained obstacle recognition deep learning model cannot produce an accurate position frame. In both cases the contour frame and the position frame of the obstacle do not overlap.
In this case, if it is determined that the obstacle to be identified has no three-dimensional point cloud information, or its point cloud information is in a discrete state, the three-dimensional points inside the position frame are counted; when the valid three-dimensional point cloud information satisfies a certain quantity, for example its number is greater than a preset number, the distance of the obstacle to be identified, together with the three-dimensional point cloud information corresponding to the edges of the position frame, is calculated from the valid point cloud information.
Specifically, d = mean − k × δ, where d is the distance of the obstacle to be identified, mean is the mean of the valid three-dimensional point cloud information, k is a constant, and δ is the standard deviation of the valid three-dimensional point cloud information.
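A minimal sketch of this statistic over the depth values inside the position frame follows; the constant k = 2.0 is an illustrative choice, since the patent only states that k is a constant.
```python
import numpy as np

def obstacle_distance(depths_in_frame, k=2.0):
    """Distance estimate d = mean - k * delta over the valid depth samples
    found inside the position frame."""
    depths = np.asarray(depths_in_frame, dtype=float)
    valid = depths[np.isfinite(depths) & (depths > 0)]
    if valid.size == 0:
        return None            # no usable point cloud inside the frame
    return float(valid.mean() - k * valid.std())
```
Subtracting k standard deviations pulls the estimate toward the nearer points of the cloud, which reads as a conservative choice for avoidance.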
If it is determined that the obstacle to be identified has three-dimensional point cloud information and that information is in an aggregated state, it is further determined whether a position frame exists; when no position frame exists, the detection result of the obstacle to be identified is correspondingly an unknown obstacle together with the three-dimensional point cloud information calculated via the binocular disparity principle.
It should be understood that when the obstacle to be identified is determined to be an unknown obstacle, the robot can avoid it using the three-dimensional point cloud information, and the obstacle can additionally be analysed a second time by the pre-trained obstacle recognition deep learning model to obtain its category and position frame.
Correspondingly, after the detection result of the obstacle to be identified is determined to be an unknown obstacle and the three-dimensional point cloud information, the method further includes: inputting the environment image data into the pre-trained obstacle recognition deep learning model for inference to obtain the category and the position frame of the obstacle to be identified.
In addition, the accuracy of the pre-trained obstacle recognition deep learning model's inference on obstacles to be identified can be measured, and the model can be retrained according to that accuracy so as to improve its recognition accuracy.
S304: avoiding the obstacle according to the detection result.
As the above analysis shows, the obstacle avoidance method provided by the embodiment of the application extracts three-dimensional point cloud information of the obstacle to be identified from acquired environment image data; performs inference on the environment image data using a pre-trained obstacle recognition deep learning model to obtain the category and the position frame of the obstacle to be identified; combines the three-dimensional point cloud information, the category and the position frame to determine the detection result of the obstacle to be identified; and finally avoids the obstacle according to the detection result. Determining the detection result from this combination makes the robot's recognition of obstacles in the environment accurate, so that avoiding obstacles according to the detection result is effective and the probability of collision or becoming stuck is reduced.
Referring to fig. 4, fig. 4 is a schematic block diagram of a robot provided in the embodiment of the present application.
Illustratively, the robot 10 includes a processor 101 and a memory 103.
Further, as described in the foregoing embodiments, the robot 10 is a mobile service robot.
Illustratively, the processor 101 and memory 103 are connected by a bus 104, such as an I2C (Inter-integrated Circuit) bus, for example.
The processor 101 is used to provide computing and control capabilities supporting the operation of the entire robot. The processor 101 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
Specifically, the memory 103 may be a Flash chip, a read-only memory (ROM), a magnetic disk, an optical disk, a USB disk or a removable hard disk.
Those skilled in the art will appreciate that the structure shown in fig. 4 is a block diagram of only a portion of the structure associated with the embodiments of the present application, and does not constitute a limitation on the robot 10 to which the embodiments of the present application may be applied, and a particular robot 10 may include more or fewer components than those shown, or may combine some components, or have a different arrangement of components.
The processor 101 is configured to run a computer program stored in the memory 103, and when executing the computer program, implement the steps of the above-mentioned obstacle avoidance method provided in the embodiment of the present application.
In an embodiment, the processor 101 is adapted to run a computer program stored in the memory 103 of the robot and to carry out the following steps when executing said computer program:
acquiring environment image data, and extracting three-dimensional point cloud information of an obstacle to be identified from the environment image data;
performing inference on the environment image data using a pre-trained obstacle recognition deep learning model to obtain the category and the position frame of the obstacle to be identified;
determining a detection result of the obstacle to be identified according to the three-dimensional point cloud information, the category and the position frame;
and avoiding the obstacle according to the detection result.
In an embodiment, the determining of the detection result of the obstacle to be identified according to the three-dimensional point cloud information, the category and the position frame includes:
determining a contour frame of the obstacle to be identified according to the three-dimensional point cloud information;
and determining the overlap rate of the contour frame and the position frame, and determining the detection result of the obstacle to be identified according to the overlap rate.
In an embodiment, the determining of the overlap rate of the contour frame and the position frame and of the detection result of the obstacle to be identified according to the overlap rate includes:
if the overlap rate is greater than a first preset threshold, determining that the detection result of the obstacle to be identified is the category and the three-dimensional point cloud information;
and if the overlap rate is less than or equal to the first preset threshold, determining the detection result of the obstacle to be identified according to a first ratio of the intersection of the contour frame and the position frame to the contour frame, or according to a second ratio of the intersection of the contour frame and the position frame to the position frame.
In one embodiment, the overlap rate is the intersection-over-union of the contour frame and the position frame.
In an embodiment, the determining of the detection result of the obstacle to be identified according to a first ratio of the intersection of the contour frame and the position frame to the contour frame, or a second ratio of the intersection of the contour frame and the position frame to the position frame, includes:
if the first ratio is greater than a second preset threshold, or the second ratio is greater than the second preset threshold, determining that the detection result of the obstacle to be identified is the category and the three-dimensional point cloud information;
and if the first ratio is less than or equal to the second preset threshold and the second ratio is less than or equal to the second preset threshold, determining the detection result of the obstacle to be identified according to the intersection of the contour frame and the position frame.
In an embodiment, the determining of the detection result of the obstacle to be identified according to the intersection of the contour frame and the position frame includes:
if the intersection of the contour frame and the position frame is empty, determining the state of the three-dimensional point cloud information of the obstacle to be identified;
and if the state of the three-dimensional point cloud information of the obstacle to be identified is a discrete state, determining that the detection result of the obstacle to be identified is the category and the statistical three-dimensional point cloud information, wherein the statistical three-dimensional point cloud information is obtained by statistics over the three-dimensional point cloud information within the position frame.
In an embodiment, after the determining of the state of the three-dimensional point cloud information of the obstacle to be identified, the method further includes:
if the state of the three-dimensional point cloud information of the obstacle to be identified is an aggregated state, determining that the detection result of the obstacle to be identified is an unknown obstacle and the three-dimensional point cloud information.
In an embodiment, after the determining that the detection result of the obstacle to be identified is an unknown obstacle and the three-dimensional point cloud information, the method further includes:
inputting the environment image data into the pre-trained obstacle recognition deep learning model for inference to obtain the category and the position frame of the obstacle to be identified.
It should be understood that the specific principle and implementation manner of the robot provided in this embodiment are the same as those of the implementation process in the obstacle avoidance method in the foregoing embodiment, and are not described herein again.
In addition, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, causes the processor to implement the steps of the obstacle avoidance method, namely:
Acquiring environment image data, and extracting three-dimensional point cloud information of an obstacle to be identified from the environment image data;
performing inference on the environment image data using a pre-trained obstacle recognition deep learning model to obtain the category and the position frame of the obstacle to be identified;
determining a detection result of the obstacle to be identified according to the three-dimensional point cloud information, the category and the position frame;
and avoiding the obstacle according to the detection result.
In an embodiment, the determining of the detection result of the obstacle to be identified according to the three-dimensional point cloud information, the category and the position frame includes:
determining a contour frame of the obstacle to be identified according to the three-dimensional point cloud information;
and determining the overlap rate of the contour frame and the position frame, and determining the detection result of the obstacle to be identified according to the overlap rate.
In an embodiment, the determining of the overlap rate of the contour frame and the position frame and of the detection result of the obstacle to be identified according to the overlap rate includes:
if the overlap rate is greater than a first preset threshold, determining that the detection result of the obstacle to be identified is the category and the three-dimensional point cloud information;
and if the overlap rate is less than or equal to the first preset threshold, determining the detection result of the obstacle to be identified according to a first ratio of the intersection of the contour frame and the position frame to the contour frame, or according to a second ratio of the intersection of the contour frame and the position frame to the position frame.
In one embodiment, the overlap rate is the intersection-over-union of the contour frame and the position frame.
In an embodiment, the determining of the detection result of the obstacle to be identified according to a first ratio of the intersection of the contour frame and the position frame to the contour frame, or a second ratio of the intersection of the contour frame and the position frame to the position frame, includes:
if the first ratio is greater than a second preset threshold, or the second ratio is greater than the second preset threshold, determining that the detection result of the obstacle to be identified is the category and the three-dimensional point cloud information;
and if the first ratio is less than or equal to the second preset threshold and the second ratio is less than or equal to the second preset threshold, determining the detection result of the obstacle to be identified according to the intersection of the contour frame and the position frame.
In an embodiment, the determining of the detection result of the obstacle to be identified according to the intersection of the contour frame and the position frame includes:
if the intersection of the contour frame and the position frame is empty, determining the state of the three-dimensional point cloud information of the obstacle to be identified;
and if the state of the three-dimensional point cloud information of the obstacle to be identified is a discrete state, determining that the detection result of the obstacle to be identified is the category and the statistical three-dimensional point cloud information, wherein the statistical three-dimensional point cloud information is obtained by statistics over the three-dimensional point cloud information within the position frame.
In an embodiment, after the determining of the state of the three-dimensional point cloud information of the obstacle to be identified, the method further includes:
if the state of the three-dimensional point cloud information of the obstacle to be identified is an aggregated state, determining that the detection result of the obstacle to be identified is an unknown obstacle and the three-dimensional point cloud information.
In an embodiment, after the determining that the detection result of the obstacle to be identified is an unknown obstacle and the three-dimensional point cloud information, the method further includes:
inputting the environment image data into the pre-trained obstacle recognition deep learning model for inference to obtain the category and the position frame of the obstacle to be identified.
Specifically, the specific principle and implementation manner of each step provided in this embodiment are the same as those of the implementation process in the obstacle avoidance method in the foregoing embodiment, and are not described here again.
The computer-readable storage medium may be an internal storage unit of the robot in the foregoing embodiments, for example a hard disk or memory of the robot. The computer-readable storage medium may also be an external storage device of the robot, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash card provided on the robot.
it is to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
It should also be understood that the term "or" as used in this application refers to and encompasses any and all possible combinations of one or more of the associated listed items.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think of various equivalent modifications or substitutions within the technical scope of the present application, and these modifications or substitutions should be covered within the scope of the present application.

Claims (7)

1. An obstacle avoidance method, characterized in that the method comprises:
acquiring environment image data, and extracting three-dimensional point cloud information of an obstacle to be identified from the environment image data;
performing inference on the environment image data using a pre-trained obstacle recognition deep learning model to obtain the category and the position frame of the obstacle to be identified;
determining a contour frame of the obstacle to be identified according to the three-dimensional point cloud information;
determining the overlap rate of the position frame and the contour frame, and, if the overlap rate of the position frame and the contour frame is greater than a first preset threshold, determining that the detection result of the obstacle to be identified is the category and the three-dimensional point cloud information of the obstacle to be identified extracted from the environment image data;
if the overlap rate of the position frame and the contour frame is less than or equal to the first preset threshold, determining the detection result of the obstacle to be identified according to a first ratio of the intersection of the contour frame and the position frame to the contour frame, or according to a second ratio of the intersection of the contour frame and the position frame to the position frame;
and avoiding the obstacle according to the detection result.
2. The method according to claim 1, wherein the determining of the detection result of the obstacle to be identified according to a first ratio of the intersection of the contour frame and the position frame to the contour frame, or a second ratio of the intersection of the contour frame and the position frame to the position frame, comprises:
if the first ratio is greater than a second preset threshold, or the second ratio is greater than the second preset threshold, determining that the detection result of the obstacle to be identified is the category and the three-dimensional point cloud information;
and if the first ratio is less than or equal to the second preset threshold and the second ratio is less than or equal to the second preset threshold, determining the detection result of the obstacle to be identified according to the intersection of the contour frame and the position frame.
3. The method according to claim 2, wherein the determining of the detection result of the obstacle to be identified according to the intersection of the contour frame and the position frame comprises:
if the intersection of the contour frame and the position frame is empty, determining the state of the three-dimensional point cloud information of the obstacle to be identified;
and if the state of the three-dimensional point cloud information of the obstacle to be identified is a discrete state, determining that the detection result of the obstacle to be identified is the category and the statistical three-dimensional point cloud information, wherein the statistical three-dimensional point cloud information is obtained by statistics over the three-dimensional point cloud information within the position frame.
4. The method of claim 3, further comprising, after the determining of the state of the three-dimensional point cloud information of the obstacle to be identified:
if the state of the three-dimensional point cloud information of the obstacle to be identified is an aggregated state, determining that the detection result of the obstacle to be identified is an unknown obstacle and the three-dimensional point cloud information.
5. The method according to claim 4, further comprising, after determining that the detection result of the obstacle to be identified is an unknown obstacle and the three-dimensional point cloud information:
inputting the environment image data into the pre-trained obstacle recognition deep learning model for inference to obtain the category and the position frame of the obstacle to be identified.
6. A robot comprising a memory and a processor;
the memory is used for storing a computer program;
the processor is configured to execute the computer program and, when executing the computer program, implement the steps of the obstacle avoidance method according to any one of claims 1 to 5.
7. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program, which, when executed by a processor, causes the processor to implement the steps of the obstacle avoidance method according to any one of claims 1 to 5.
CN202210637875.7A 2022-06-08 2022-06-08 Obstacle avoidance method, robot and storage medium Active CN114721404B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210637875.7A CN114721404B (en) 2022-06-08 2022-06-08 Obstacle avoidance method, robot and storage medium


Publications (2)

Publication Number Publication Date
CN114721404A (en) 2022-07-08
CN114721404B (en) 2022-09-13

Family

ID=82233101

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210637875.7A Active CN114721404B (en) 2022-06-08 2022-06-08 Obstacle avoidance method, robot and storage medium

Country Status (1)

Country Link
CN (1) CN114721404B (en)

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108734058A (en) * 2017-04-18 2018-11-02 百度在线网络技术(北京)有限公司 Obstacle identity recognition methods, device, equipment and storage medium
CN110738081A (en) * 2018-07-19 2020-01-31 杭州海康威视数字技术股份有限公司 Abnormal road condition detection method and device
KR102147187B1 (en) * 2019-12-03 2020-08-24 네오컨버전스 주식회사 Method and apparatus for de-identificationing personal data on video sequentially based on deep learning
KR20210072689A (en) * 2019-12-09 2021-06-17 주식회사 업데이터 Method for creating obstruction detection model using deep learning image recognition and apparatus thereof
CN112650235A (en) * 2020-03-11 2021-04-13 南京奥拓电子科技有限公司 Robot obstacle avoidance control method and system and robot
WO2021226876A1 (en) * 2020-05-13 2021-11-18 华为技术有限公司 Target detection method and apparatus
CN113836975A (en) * 2020-06-24 2021-12-24 天津工业大学 Binocular vision unmanned aerial vehicle obstacle avoidance method based on YOLOV3
CN111752285A (en) * 2020-08-18 2020-10-09 广州市优普科技有限公司 Autonomous navigation method and device for quadruped robot, computer equipment and storage medium
CN112464812A (en) * 2020-11-27 2021-03-09 西北工业大学 Vehicle-based sunken obstacle detection method
CN112528773A (en) * 2020-11-27 2021-03-19 深兰科技(上海)有限公司 Obstacle information fusion method and device, electronic equipment and storage medium
CN112528884A (en) * 2020-12-16 2021-03-19 上海悠络客电子科技股份有限公司 Fire fighting access obstacle intelligent detection method based on deep learning method
CN113569812A (en) * 2021-08-31 2021-10-29 东软睿驰汽车技术(沈阳)有限公司 Unknown obstacle identification method and device and electronic equipment
CN114119729A (en) * 2021-11-17 2022-03-01 北京埃福瑞科技有限公司 Obstacle identification method and device
CN114266960A (en) * 2021-12-01 2022-04-01 国网智能科技股份有限公司 Point cloud information and deep learning combined obstacle detection method
CN114387535A (en) * 2021-12-08 2022-04-22 宁波职业技术学院 Multi-mode identification system and blind person glasses
CN114463362A (en) * 2021-12-29 2022-05-10 宜昌测试技术研究所 Three-dimensional collision avoidance sonar obstacle detection method and system based on deep learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on obstacle avoidance system of unmanned aerial vehicle based on machine vision; Yang Juanjuan et al.; Journal of Chinese Agricultural Mechanization; 2020-02-15 (No. 02); 161-166 *

Also Published As

Publication number Publication date
CN114721404A (en) 2022-07-08

Similar Documents

Publication Publication Date Title
US11422261B2 (en) Robot relocalization method and apparatus and robot using the same
CN110807350B (en) System and method for scan-matching oriented visual SLAM
US10395377B2 (en) Systems and methods for non-obstacle area detection
EP3627180A1 (en) Sensor calibration method and device, computer device, medium, and vehicle
CN110587597B (en) SLAM closed loop detection method and detection system based on laser radar
JP6710426B2 (en) Obstacle detection method and device
US11393256B2 (en) Method and device for liveness detection, and storage medium
CN111145214A (en) Target tracking method, device, terminal equipment and medium
CN110442120B (en) Method for controlling robot to move in different scenes, robot and terminal equipment
CN113936198B (en) Low-beam laser radar and camera fusion method, storage medium and device
US10945888B2 (en) Intelligent blind guide method and apparatus
EP3702957A1 (en) Target detection method and apparatus, and computer device
CN111382637B (en) Pedestrian detection tracking method, device, terminal equipment and medium
US20190331767A1 (en) Charging station identifying method, device, and robot
US20200334887A1 (en) Mobile robots to generate occupancy maps
EP3703008A1 (en) Object detection and 3d box fitting
US20200042005A1 (en) Obstacle avoidance method and system for robot and robot using the same
CN111136648A (en) Mobile robot positioning method and device and mobile robot
CN112183476A (en) Obstacle detection method and device, electronic equipment and storage medium
CN115147333A (en) Target detection method and device
AU2021203821A1 (en) Methods, devices, apparatuses and storage media of detecting correlated objects involved in images
CN115424245A (en) Parking space identification method, electronic device and storage medium
CN114721404B (en) Obstacle avoidance method, robot and storage medium
CN115428043A (en) Image recognition device and image recognition method
CN111291749B (en) Gesture recognition method and device and robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant