CN111142514B - Robot and obstacle avoidance method and device thereof - Google Patents


Info

Publication number
CN111142514B
Authority
CN
China
Prior art keywords
ground
plane
candidate
point cloud
dimensional space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911268466.9A
Other languages
Chinese (zh)
Other versions
CN111142514A (en)
Inventor
黄明强
刘志超
白龙彪
毕占甲
庞建新
熊友军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ubtech Robotics Corp
Original Assignee
Ubtech Robotics Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ubtech Robotics Corp filed Critical Ubtech Robotics Corp
Priority to CN201911268466.9A priority Critical patent/CN111142514B/en
Publication of CN111142514A publication Critical patent/CN111142514A/en
Application granted granted Critical
Publication of CN111142514B publication Critical patent/CN111142514B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0248 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means in combination with a laser
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0251 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0268 Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D1/027 Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means comprising inertial navigation means, e.g. azimuth detector
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Automation & Control Theory (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Electromagnetism (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Optics & Photonics (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Manipulator (AREA)

Abstract

The robot obstacle avoidance method comprises the following steps: acquiring a three-dimensional space point cloud, and segmenting the planes contained in it into a candidate ground set; obtaining a ground normal vector, screening the planes in the candidate ground set against the ground normal vector and the ground height at the previous moment, and determining the plane in the candidate ground set that corresponds to the ground; deleting the points corresponding to the selected ground from the acquired point cloud to obtain an obstacle point cloud, and generating a laser frame from the obstacle point cloud; and performing local obstacle avoidance according to the laser frame and the selected ground. Because the three-dimensional space point cloud is updated in real time, the positions of obstacles in the scene are updated correspondingly in real time, so the method can be effectively applied to obstacle avoidance for humanoid robots and improves their obstacle avoidance accuracy.

Description

Robot and obstacle avoidance method and device thereof
Technical Field
The application belongs to the field of robots, and particularly relates to a robot and an obstacle avoidance method and device thereof.
Background
In order to ensure that a robot can move safely and reliably to its target position, obstacle avoidance planning is generally required. For a traditional mobile robot, single-plane obstacle detection is sufficient to avoid obstacles effectively while the robot moves. However, because a humanoid robot is taller, the single-plane obstacle detection method of the traditional mobile robot cannot be applied to it; meanwhile, since lidar is expensive, a depth camera is generally adopted on a humanoid robot for spatial obstacle avoidance.
At present, there are two types of spatial obstacle avoidance methods for humanoid robots. The first uses the three-dimensional point cloud directly with a three-dimensional obstacle avoidance algorithm; it avoids obstacles accurately but takes a long time to compute. The second projects the three-dimensional point cloud onto the ground where the robot is located to form two-dimensional laser frame data and then applies a planar obstacle avoidance algorithm; it is simple and efficient, and is widely used on taller robots. The second method generally assumes that the height and pitch angle of the camera relative to the ground are constant or change very little, which is easy to satisfy on a mobile robot. However, the center of gravity of a humanoid robot shifts considerably while it walks, and its body posture also changes greatly, for example when the body leans forwards or backwards, so the assumption of a constant (or nearly constant) ground height and pitch angle no longer holds, and obstacles in the environment cannot be detected accurately.
Disclosure of Invention
In view of this, embodiments of the present application provide a robot and an obstacle avoidance method and device thereof, so as to solve the problem in the prior art that a humanoid robot cannot accurately detect obstacles in the environment because its center of gravity and body posture change greatly while it walks.
A first aspect of an embodiment of the present application provides a robot obstacle avoidance method, including:
acquiring a three-dimensional space point cloud, and segmenting the planes contained in it into a candidate ground set;
obtaining a ground normal vector, screening the planes in the candidate ground set against the ground normal vector and the ground height at the previous moment, and determining the plane in the candidate ground set corresponding to the ground;
deleting the points corresponding to the selected ground from the acquired three-dimensional space point cloud to obtain an obstacle point cloud, and generating a laser frame from the obstacle point cloud;
and performing local obstacle avoidance according to the laser frame and the selected ground.
With reference to the first aspect, in a first possible implementation manner of the first aspect, the step of segmenting a plane included in the three-dimensional space point cloud as a candidate ground set includes:
estimating a normal vector of a plane in the three-dimensional space point cloud;
and segmenting the planes in the point cloud into the candidate ground set according to the estimated normal vectors in combination with a region growing segmentation method.
With reference to the first aspect, in a second possible implementation manner of the first aspect, before the step of performing a screening comparison on the planes in the candidate ground set according to the ground normal vector and the ground height at the previous time, the method further includes:
and determining the plane parameters for the candidate ground set by the random sample consensus (RANSAC) method to obtain the plane equation corresponding to each plane in the candidate ground set.
With reference to the first aspect or the second possible implementation manner of the first aspect, in a third possible implementation manner of the first aspect, the step of screening the planes in the candidate ground set according to the ground normal vector and the ground height at the previous moment to determine the plane in the candidate ground set corresponding to the ground includes:
determining the normal vector of each plane in the candidate ground set according to its plane equation;
screening the normal vectors of the planes in the candidate ground set against the ground normal vector to obtain a first candidate plane set;
and acquiring the heights of the planes in the first candidate plane set, and screening them against the ground height at the previous moment to determine the plane in the first candidate plane set corresponding to the ground.
With reference to the first aspect, in a fourth possible implementation manner of the first aspect, the step of obtaining a ground normal vector includes:
acquiring acceleration and angular velocity of the robot through an inertial measurement module;
and determining the ground normal vector according to the acquired acceleration and angular velocity.
With reference to the first aspect, in a fifth possible implementation manner of the first aspect, the step of generating a laser frame according to the obstacle point cloud includes:
projecting the obstacle point cloud onto the plane determined in the three-dimensional space point cloud to correspond to the ground, so as to obtain projection information of the obstacles on that plane;
and determining obstacle information in the generated laser frame according to the projection information.
With reference to the first aspect, in a sixth possible implementation manner of the first aspect, the step of performing local obstacle avoidance according to the laser frame and the selected ground includes:
and performing local obstacle avoidance according to the plane corresponding to the ground and the laser frame, in combination with the timed elastic band (TEB) algorithm.
A second aspect of embodiments of the present application provides a robot obstacle avoidance device, the robot obstacle avoidance device comprising:
a candidate ground set generation unit, configured to acquire a three-dimensional space point cloud and segment the planes contained in it into a candidate ground set;
a ground determining unit, configured to acquire a ground normal vector, screen the planes in the candidate ground set against the ground normal vector and the ground height at the previous moment, and determine the plane in the candidate ground set corresponding to the ground;
a laser frame generation unit, configured to delete the points corresponding to the selected ground from the acquired three-dimensional space point cloud to obtain an obstacle point cloud, and generate a laser frame from the obstacle point cloud;
and an obstacle avoidance unit, configured to perform local obstacle avoidance according to the laser frame and the selected ground.
A third aspect of the embodiments of the present application provides a robot comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the robot obstacle avoidance method according to any one of the first aspects.
A fourth aspect of the embodiments of the present application provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the robot obstacle avoidance method according to any one of the first aspects.
Compared with the prior art, the embodiments of the present application have the following beneficial effects: a candidate ground set is generated from the planes contained in the collected three-dimensional space point cloud; the candidate ground set is screened according to the ground normal vector and the ground height at the previous moment to determine the plane in the set corresponding to the ground; the three-dimensional points corresponding to the ground are deleted from the point cloud to obtain an obstacle point cloud; a laser frame is generated from the obstacle point cloud; and local obstacle avoidance is performed using the laser frame and the selected ground.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required for the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic implementation flow diagram of a robot obstacle avoidance method provided in an embodiment of the present application;
fig. 2 is a schematic implementation flow chart of a method for generating a candidate ground set according to an embodiment of the present application;
fig. 3 is a schematic flow chart of determining a plane corresponding to a ground according to an embodiment of the present application;
fig. 4 is a schematic view of a robot obstacle avoidance device according to an embodiment of the present disclosure;
fig. 5 is a schematic view of a robot provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to illustrate the technical solutions described in the present application, the following description is made by specific examples.
Fig. 1 is a schematic implementation flow chart of an obstacle avoidance method for a humanoid robot according to an embodiment of the present application, and is described in detail below:
in step S101, a three-dimensional space point cloud is obtained, and a plane included in the three-dimensional space point cloud is segmented as a candidate ground set;
specifically, the robot may be provided with a color camera for capturing a planar image and a depth camera for capturing a depth image. The color image collected by the color camera and the depth image collected by the depth camera can be registered, and three-dimensional space coordinates (X, Y, Z) corresponding to each pixel point (u, v) in the color image are obtained.
For example, when the camera focal length (fx, fy) of the depth camera and the optical center image coordinates are (cx, cy), determining the three-dimensional coordinates of the pixel point in the depth image according to the camera parameters of the camera may be: and determining three-dimensional coordinates (X, Y, Z) of the pixel point (u, v) in the depth image, wherein Z is a depth value of each pixel point (u, v) acquired by a depth camera.
Wherein the optical axis image coordinates may be points at which the optical axis intersects the CCD camera imaging plane through the lens.
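The registration step above can be sketched with the standard pinhole back-projection. This is an illustrative sketch, not the patent's implementation; the intrinsic values (fx, fy, cx, cy) below are made-up examples:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image into a 3-D point cloud (pinhole model)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx          # X = (u - cx) * Z / fx
    y = (v - cy) * z / fy          # Y = (v - cy) * Z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# A flat 4x4 depth image at 2 m should back-project onto the Z = 2 plane.
depth = np.full((4, 4), 2.0)
pts = depth_to_points(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
```

Each pixel maps to one 3-D point, so a 4×4 depth image yields 16 points, all with Z = 2.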
After the three-dimensional space point cloud is obtained, the planes it contains can be determined and a candidate ground set generated, as shown in fig. 2, including:
in step S201, estimating a normal vector of a plane in the three-dimensional space point cloud;
according to the obtained three-dimensional space point cloud, a plurality of planes positioned in the three-dimensional space and formed by the point cloud can be obtained, and according to every three or more three-dimensional space points on the determined plane of the three-dimensional space, the normal vector corresponding to the plane, namely the vector perpendicular to the plane formed by the point cloud, can be determined. Because the three-dimensional space point cloud is a real and effective space point, the direction information of the three-dimensional space plane can be effectively determined through the normal vector of the plane determined by the three-dimensional space point cloud.
In step S202, dividing a plane in the point cloud into candidate ground sets according to the estimated normal vector in combination with the region growing division method;
after determining the plane of each three-dimensional space point cloud, the normal vector of the plane formed by the three-dimensional space point clouds can be obtained, and the planes in the three-dimensional space point clouds are segmented through the normal vector. When the plane in the three-dimensional space point cloud is segmented, the plane can be determined according to similarity of normal directions and the magnitude of curvature. For example, whether the point clouds are in the same plane may be determined by comparing the normal vector change angle value to be less than a preset angle threshold and the curvature to be less than a preset curvature threshold.
It can be understood that the curvature threshold may be used alone to segment the plane of the three-dimensional space point cloud, or the angle threshold of the angle change of the normal vector may be used alone to segment the plane of the three-dimensional space point cloud, so as to obtain the candidate ground set.
By dividing the point cloud into different planes, the planes included in the three-dimensional space can be determined. Since the planes in the three-dimensional space image may include planes of other space objects in addition to the ground, the segmented planes may need to be further screened, and thus the determined planes may be used as the first candidate ground set.
Of course, in determining the candidate ground set, the planes included in the three-dimensional space may also be determined by other plane segmentation algorithms.
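The region-growing acceptance test described above can be sketched as follows. The threshold values are illustrative assumptions; the patent only requires preset thresholds:

```python
import numpy as np

ANGLE_THRESH_DEG = 5.0   # assumed angle threshold between normal vectors
CURVATURE_THRESH = 0.02  # assumed local-curvature threshold

def same_plane(seed_normal, point_normal, point_curvature):
    """Region-growing acceptance test: a point joins the current region when
    its normal deviates from the seed normal by less than the angle threshold
    and its local surface curvature is below the curvature threshold."""
    cos_angle = np.clip(np.dot(seed_normal, point_normal), -1.0, 1.0)
    angle_deg = np.degrees(np.arccos(cos_angle))
    return angle_deg < ANGLE_THRESH_DEG and point_curvature < CURVATURE_THRESH

up = np.array([0.0, 0.0, 1.0])
tilted = np.array([0.0, np.sin(np.radians(3)), np.cos(np.radians(3))])   # 3 deg off
steep = np.array([0.0, np.sin(np.radians(30)), np.cos(np.radians(30))])  # 30 deg off
```

A nearly coplanar neighbor (3° off, low curvature) is accepted; a steeply tilted or highly curved one is rejected, which is exactly the behavior the angle-only or curvature-only variants relax.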
After the candidate ground set is determined, the parameters of the plane equation of each plane in the set may be further determined in step S203, enabling a more accurate determination of the planes that may be the ground.
In step S203, the plane parameters for the candidate ground set are determined by the random sample consensus (RANSAC) method to obtain the plane equation corresponding to each plane in the candidate ground set.
The process of determining the plane parameters of the candidate ground set by RANSAC can be as follows:
For each plane of the candidate ground set, a plane equation with undetermined parameters is selected, and several three-dimensional points in the plane are randomly chosen to calculate those parameters. The remaining points in the plane are then checked against the resulting equation, and the number of three-dimensional points satisfying it is recorded. These steps are repeated several times, each time computing the parameters from a new random selection of points and recording how many points conform to the resulting plane equation. The plane equation supported by the most three-dimensional points is selected, its parameters are recalculated by the least squares method, and the plane equation corresponding to the plane is determined from the recalculated parameters. This makes it convenient to compare and screen the planes in the candidate ground set: for example, a plane's normal vector, or its height, can be read off from its plane equation.
In step S102, a ground normal vector is obtained, and the planes in the candidate ground set are screened against the ground normal vector and the ground height at the previous moment to determine the plane in the candidate ground set corresponding to the ground;
The ground normal vector can be determined by acquiring the acceleration and angular velocity of the robot through an inertial measurement module.
Since the ground normal vector generally coincides with the direction of gravity, which is constant, only the change of the robot's own pose needs to be determined; the ground normal vector is then derived from that change.
For example, when an inclination of the robot is detected by the angular velocity sensor, the ground normal vector in the image acquired by the robot is rotated accordingly by the detected inclination angle. From the robot's acceleration, changes in its position, including changes in its height, can be detected, and from these the change in the position of the ground plane relative to the robot can be determined.
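As a rough sketch of how the body tilt estimated from the IMU yields the ground normal in the robot's own frame: gravity is fixed in the world, so rotating the world "up" direction by the inverse of the body rotation gives the ground normal seen by the robot. The axis conventions (x-roll, y-pitch, z-up) are assumptions for illustration:

```python
import numpy as np

def ground_normal_from_imu(roll, pitch):
    """Ground normal in the robot frame, given body roll (about x) and pitch
    (about y) in radians, as integrated from the IMU gyro and corrected with
    the accelerometer's gravity direction."""
    rx = np.array([[1, 0, 0],
                   [0, np.cos(roll), -np.sin(roll)],
                   [0, np.sin(roll),  np.cos(roll)]])
    ry = np.array([[ np.cos(pitch), 0, np.sin(pitch)],
                   [0, 1, 0],
                   [-np.sin(pitch), 0, np.cos(pitch)]])
    world_up = np.array([0.0, 0.0, 1.0])   # gravity direction, fixed in the world
    return (rx @ ry).T @ world_up          # rotate world "up" into the body frame

n_level = ground_normal_from_imu(0.0, 0.0)              # robot standing upright
n_pitched = ground_normal_from_imu(0.0, np.radians(10)) # body leaning forward 10 deg
```

When the robot is upright the normal is (0, 0, 1); a 10° pitch tilts the perceived normal by the same 10°, which is the correction the screening in step S102 relies on.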
The planes in the candidate ground set are then screened according to the determined ground normal vector in combination with the ground height determined at the previous moment, and the plane that best matches the characteristics of the ground is selected as the plane corresponding to the ground.
For the plane in the candidate ground set that corresponds to the ground, its height matches the ground height determined at the previous moment during the robot's walking, and its normal vector matches the ground normal vector. Here, the height matching the previously determined ground height, or the normal vector matching the ground normal vector, may mean that the difference between the two values is smaller than a preset threshold.
By comparing the planes with the ground height determined at the previous moment and with the ground normal vector, one or more planes may be obtained. When several planes are obtained, the one with the best matching degree can be selected as the plane corresponding to the ground.
In one implementation, the determining the plane corresponding to the ground may include, in a manner as shown in fig. 3:
in step S301, determining a normal vector of a plane in the candidate ground set according to a plane equation of the plane in the candidate ground set;
after determining the plane equations in the candidate ground set, the normal vector for each plane may be determined from the planar manner.
For example, the plane manner for one of the planes in the candidate ground set is: ax+by+cz+d=0, (a, b, c are not 0 at the same time) its normal vector can be determined as (a, b, c).
In step S302, screening and comparing the normal vector of the plane in the candidate ground with the ground normal vector to obtain a first candidate plane set;
the direction of the plane in the candidate ground set can be determined by calculating the normal vector of the plane in the candidate ground set, and the angle between the normal vector of the plane in the candidate ground set and the ground normal vector can be determined by comparing the direction of the plane in the candidate ground set with the ground normal vector acquired in advance. And comparing and screening planes in the threshold ground set according to a preset angle threshold. For example, planes in the candidate ground set having an angle between the normal vector of the plane and the ground normal vector greater than the angle threshold may be filtered out to obtain a first candidate plane set in the candidate ground set that is parallel, or substantially parallel, to the ground.
In step S303, the height of the plane in the first candidate plane set is obtained, and compared with the height of the ground at the previous moment in a screening manner, the plane in the first candidate plane set corresponding to the ground is determined.
And further carrying out height screening on the first candidate plane set, and combining the height of the ground at the last moment, searching the plane in the first candidate plane set which is most matched with the height of the ground, and determining the plane as the plane corresponding to the ground in the candidate plane set.
Of course, the planes in the candidate plane set may be initially screened by the ground height, and then the planes corresponding to the ground in the candidate plane set may be obtained by matching the plane normal vector with the ground normal vector.
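The two-stage screening of steps S302–S303 (normal angle first, then height) might look like the following sketch. The threshold values and the plane representation (unit normal n with offset d, so the plane height is |d|) are assumptions; the patent only requires preset thresholds:

```python
import numpy as np

def select_ground(planes, ground_normal, last_height,
                  angle_thresh_deg=10.0, height_thresh=0.05):
    """Keep planes whose normal is close to the ground normal (first candidate
    plane set), then pick the one whose height is closest to the ground height
    from the previous moment. Each plane is (n, d) with |n| = 1."""
    first_set = []
    for n, d in planes:
        cosang = np.clip(abs(np.dot(n, ground_normal)), 0.0, 1.0)
        if np.degrees(np.arccos(cosang)) < angle_thresh_deg:
            first_set.append((n, d))           # passes the normal-angle screen
    best, best_diff = None, float("inf")
    for n, d in first_set:
        height = abs(d)                        # distance of plane to origin
        diff = abs(height - last_height)
        if diff < height_thresh and diff < best_diff:
            best, best_diff = (n, d), diff     # best height match so far
    return best

planes = [(np.array([0.0, 0.0, 1.0]), -1.0),   # horizontal, height 1.0 (floor)
          (np.array([0.0, 0.0, 1.0]), -1.8),   # horizontal table top
          (np.array([0.0, 1.0, 0.0]), -0.5)]   # vertical wall
ground = select_ground(planes, np.array([0.0, 0.0, 1.0]), last_height=1.02)
```

The wall fails the normal-angle screen, the table top fails the height screen, and the floor plane is returned; swapping the order of the two screens, as the text notes, yields the same result.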
In step S103, the points corresponding to the selected ground are deleted from the acquired three-dimensional space point cloud to obtain an obstacle point cloud, and a laser frame is generated from the obstacle point cloud;
After the plane corresponding to the ground has been selected from the candidate ground set according to the ground height and the ground normal vector at the previous moment, the three-dimensional points corresponding to the ground are deleted from the point cloud, yielding the obstacle point cloud. The obstacle point cloud can then be projected onto the plane corresponding to the ground, and a laser frame corresponding to the obstacle point cloud is generated.
A laser frame is the data output by a lidar; it comprises the laser scanning time, the scanning range, and the distance from the obstacle to the laser center at each scanning angle. Here, the scanning time of the laser frame can be determined from the time of each image acquisition, its scanning range from the orientation corresponding to the image acquisition, and the distances it contains from the computed positions of the points projected onto the ground plane and the distance between the robot's position and each projected position.
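A minimal sketch of turning the projected obstacle points into per-angle laser-frame ranges follows. It assumes the points are already expressed in a ground-aligned frame centered on the robot (so projection is just dropping the height axis); the beam count and maximum range are assumed values:

```python
import numpy as np

def points_to_laser_frame(obstacle_pts, n_beams=360, max_range=10.0):
    """Project obstacle points onto the ground plane and keep, per angular
    bin, the distance to the nearest obstacle - the per-angle range layout
    a 2-D lidar frame uses."""
    ranges = np.full(n_beams, max_range)        # no return -> max range
    x, y = obstacle_pts[:, 0], obstacle_pts[:, 1]
    dists = np.hypot(x, y)                      # planar distance to robot
    angles = np.arctan2(y, x)                   # in [-pi, pi)
    bins = ((angles + np.pi) / (2 * np.pi) * n_beams).astype(int) % n_beams
    for b, r in zip(bins, dists):
        ranges[b] = min(ranges[b], r)           # nearest return per beam
    return ranges

# One obstacle point 2 m along +x, 0.3 m above the ground.
pts = np.array([[2.0, 0.0, 0.3]])
scan = points_to_laser_frame(pts)
```

With this binning, the +x direction falls in bin 180 of 360, which reads 2 m; all other beams report the maximum range, and the resulting array can feed any planar obstacle avoidance algorithm.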
In step S104, local obstacle avoidance is performed according to the laser frame and the selected ground.
After the ground plane during the robot's walking and the laser frame (including information such as the distance from the robot to obstacles) have been determined, the timed elastic band (TEB) algorithm can be used to optimize the distance from the robot to obstacles, the distance from the robot to the end point, and the robot's travel time, speed, and acceleration, obtaining a shorter path that keeps away from obstacles, reaches the end point, and respects the robot's acceleration and speed limits, thus effectively realizing obstacle avoidance for the robot.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not limit the implementation of the embodiments of the present application in any way.
Fig. 4 is a schematic structural diagram of a robot obstacle avoidance device according to an embodiment of the present application, and is described in detail below:
the robot obstacle avoidance device described in fig. 4 includes:
a candidate ground set generating unit 401, configured to obtain a three-dimensional space point cloud, and segment a plane included in the three-dimensional space point cloud as a candidate ground set;
the ground determining unit 402 is configured to obtain a ground normal vector, and perform screening comparison on the planes in the candidate ground set according to the ground normal vector and the ground height at the previous time, so as to determine the candidate ground set plane corresponding to the ground;
a laser frame generating unit 403, configured to delete a point cloud corresponding to the selected ground from the acquired three-dimensional space point clouds, obtain an obstacle point cloud, and generate a laser frame according to the obstacle point cloud;
and the obstacle avoidance unit 404 is configured to perform local obstacle avoidance according to the laser frame and the selected ground.
The robot obstacle avoidance device shown in fig. 4 corresponds to the robot obstacle avoidance method shown in fig. 1.
Fig. 5 is a schematic view of a robot according to an embodiment of the present application. As shown in fig. 5, the robot 5 of this embodiment includes: a processor 50, a memory 51 and a computer program 52, such as a robot obstacle avoidance program, stored in the memory 51 and executable on the processor 50. The processor 50, when executing the computer program 52, implements the steps of the various robot obstacle avoidance method embodiments described above. Alternatively, the processor 50, when executing the computer program 52, performs the functions of the modules/units of the apparatus embodiments described above.
By way of example, the computer program 52 may be partitioned into one or more modules/units that are stored in the memory 51 and executed by the processor 50 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing a specific function for describing the execution of the computer program 52 in the robot 5. For example, the computer program 52 may be partitioned into:
the candidate ground set generation unit is used for acquiring a three-dimensional space point cloud and dividing planes contained in the three-dimensional space point cloud to serve as candidate ground sets;
the ground determining unit is used for acquiring a ground normal vector, screening and comparing planes in the candidate ground set according to the ground normal vector and the ground height at the last moment, and determining the candidate ground set plane corresponding to the ground;
the laser frame generation unit is used for deleting the point cloud corresponding to the selected ground from the acquired three-dimensional space point clouds to obtain an obstacle point cloud, and generating a laser frame according to the obstacle point cloud;
and the obstacle avoidance unit is used for carrying out local obstacle avoidance according to the laser frame and the selected ground.
The robot may include, but is not limited to, a processor 50 and a memory 51. It will be appreciated by those skilled in the art that fig. 5 is merely an example of the robot 5 and does not constitute a limitation of the robot 5; the robot may include more or fewer components than shown, or combine certain components, or have different components. For example, the robot may also include input and output devices, network access devices, a bus, and the like.
The processor 50 may be a central processing unit (Central Processing Unit, CPU), other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), off-the-shelf programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 51 may be an internal storage unit of the robot 5, such as a hard disk or a memory of the robot 5. The memory 51 may also be an external storage device of the robot 5, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the robot 5. Further, the memory 51 may include both an internal storage unit and an external storage device of the robot 5. The memory 51 is used for storing the computer program and other programs and data required by the robot, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
Each of the foregoing embodiments is described with its own emphasis; for parts that are not described or illustrated in detail in a particular embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other manners. For example, the apparatus/terminal device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.

Claims (9)

1. A robot obstacle avoidance method, comprising:
acquiring a three-dimensional space point cloud, and dividing planes contained in the three-dimensional space point cloud to serve as candidate ground sets;
obtaining a ground normal vector, screening and comparing the planes in the candidate ground set according to the ground normal vector and the ground height at the previous moment, and determining the plane in the candidate ground set that best matches the ground characteristics as the plane corresponding to the ground;
deleting the point cloud corresponding to the selected ground from the acquired three-dimensional space point clouds to obtain an obstacle point cloud, and generating a laser frame according to the obstacle point cloud;
performing local obstacle avoidance according to the laser frame and the selected ground;
before the step of screening and comparing the planes in the candidate ground set according to the ground normal vector and the ground height at the last moment, the method further comprises:
for each plane in the candidate ground set, selecting a plane equation with preset parameters to be determined; randomly selecting a plurality of three-dimensional space points in the plane and calculating the parameters of the plane equation; verifying, with the remaining points in the plane, whether the plane equation is satisfied, and recording the number of three-dimensional space points that satisfy the plane equation; repeating the random selection a plurality of times and, for each set of parameters so determined, recording the number of three-dimensional space points satisfying the corresponding plane equation; selecting the plane equation satisfied by the largest number of three-dimensional space points, recalculating the plane parameters by a least squares method, and determining the plane equation corresponding to the plane according to the recalculated plane parameters.
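The parameter-fitting procedure recited in this claim is essentially RANSAC followed by a least-squares refinement. A compact sketch (the iteration count, inlier threshold, and SVD-based refinement are illustrative assumptions, not the claimed implementation):

```python
import numpy as np

def ransac_plane(points, iters=100, thresh=0.02, rng=None):
    """Fit a plane a*x + b*y + c*z + d = 0 to an (N, 3) point set by RANSAC,
    then refine the winning model with least squares over its inliers."""
    rng = np.random.default_rng(rng)
    best_inliers = None
    for _ in range(iters):
        # Randomly sample three points and form the plane they span.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ p0
        # Count points consistent with this candidate plane equation.
        inliers = np.abs(points @ n + d) < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Least-squares refinement: the normal is the smallest right singular
    # vector of the centred inlier set.
    P = points[best_inliers]
    centroid = P.mean(axis=0)
    _, _, vt = np.linalg.svd(P - centroid)
    n = vt[-1]
    return n, -n @ centroid
```

The smallest singular vector of the centred inliers minimizes the sum of squared point-to-plane distances, which corresponds to the least-squares recalculation of the plane parameters described above.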
2. The robot obstacle avoidance method of claim 1 wherein said step of segmenting planes contained in said three-dimensional point cloud as candidate ground sets comprises:
estimating a normal vector of a plane in the three-dimensional space point cloud;
and dividing the plane in the point cloud into candidate ground sets according to the estimated normal vector by combining a region growing division method.
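A minimal sketch of the normal-based region growing segmentation referred to above (the brute-force neighbour search, radius, and angle threshold are illustrative simplifications; practical implementations such as PCL's RegionGrowing use a k-d tree and curvature-ordered seeds):

```python
import numpy as np

def region_grow(points, normals, radius=0.1, angle_thresh=0.15):
    """Group points into regions by growing through spatial neighbours
    whose normals deviate by less than angle_thresh radians."""
    n_pts = len(points)
    labels = -np.ones(n_pts, dtype=int)
    region = 0
    for seed in range(n_pts):
        if labels[seed] != -1:
            continue
        labels[seed] = region
        queue = [seed]
        while queue:
            i = queue.pop()
            # Brute-force neighbourhood query over all unlabeled points.
            d = np.linalg.norm(points - points[i], axis=1)
            for j in np.flatnonzero((d < radius) & (labels == -1)):
                if abs(normals[i] @ normals[j]) > np.cos(angle_thresh):
                    labels[j] = region
                    queue.append(j)
        region += 1
    return labels
```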
3. The robot obstacle avoidance method of claim 1 wherein said step of screening and comparing the planes in said candidate ground set based on said ground normal vector and the ground height at the previous time to determine the plane in said candidate ground set that most closely matches the ground characteristics corresponding to the ground comprises:
determining a normal vector of a plane in the candidate ground set according to a plane equation of the plane in the candidate ground set;
screening and comparing the normal vectors of the planes in the candidate ground set with the ground normal vector to obtain a first candidate plane set;
and acquiring the heights of the planes in the first candidate plane set, and screening and comparing these heights with the ground height at the previous moment to determine the plane in the first candidate plane set corresponding to the ground.
4. The robot obstacle avoidance method of claim 1 wherein the step of obtaining the ground normal vector comprises:
acquiring acceleration and angular velocity of the robot through an inertial measurement module;
and determining the ground normal vector according to the acquired acceleration and angular velocity.
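When the robot is stationary or moving slowly, the accelerometer essentially measures the reaction to gravity, so the ground normal in the sensor frame can be approximated by normalising the specific-force vector. The sketch below assumes that quasi-static case; a real system would additionally fuse the angular velocity, e.g. with a complementary or Kalman filter, to remain accurate during motion:

```python
import numpy as np

def ground_normal_from_accel(accel):
    """Quasi-static estimate: the measured specific force opposes gravity,
    so its direction approximates the upward ground normal."""
    a = np.asarray(accel, dtype=float)
    return a / np.linalg.norm(a)
```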
5. The robot obstacle avoidance method of claim 1 wherein the step of generating a laser frame from the obstacle point cloud comprises:
projecting the obstacle point cloud to a plane corresponding to the ground according to the plane corresponding to the ground in the determined three-dimensional space point cloud, so as to obtain projection information of the obstacle on the plane corresponding to the ground;
and determining obstacle information in the generated laser frame according to the projection information.
6. The robotic obstacle avoidance method of claim 1 wherein the step of locally avoiding an obstacle from the laser frame and the selected ground comprises:
and carrying out local obstacle avoidance according to the plane corresponding to the ground and the laser frame and by combining a time elastic band TEB algorithm.
7. A robotic obstacle avoidance device, the robotic obstacle avoidance device comprising:
the candidate ground set generation unit is used for acquiring a three-dimensional space point cloud and dividing planes contained in the three-dimensional space point cloud to serve as candidate ground sets;
the ground determining unit is used for acquiring a ground normal vector, screening and comparing the planes in the candidate ground set according to the ground normal vector and the ground height at the previous moment, and determining the plane in the candidate ground set that best matches the ground characteristics as the plane corresponding to the ground;
the laser frame generation unit is used for deleting the point cloud corresponding to the selected ground from the acquired three-dimensional space point clouds to obtain an obstacle point cloud, and generating a laser frame according to the obstacle point cloud;
the obstacle avoidance unit is used for carrying out local obstacle avoidance according to the laser frame and the selected ground;
the robot keeps away barrier device still is used for:
for each plane in the candidate ground set, selecting a plane equation with preset parameters to be determined; randomly selecting a plurality of three-dimensional space points in the plane and calculating the parameters of the plane equation; verifying, with the remaining points in the plane, whether the plane equation is satisfied, and recording the number of three-dimensional space points that satisfy the plane equation; repeating the random selection a plurality of times and, for each set of parameters so determined, recording the number of three-dimensional space points satisfying the corresponding plane equation; selecting the plane equation satisfied by the largest number of three-dimensional space points, recalculating the plane parameters by a least squares method, and determining the plane equation corresponding to the plane according to the recalculated plane parameters.
8. A robot comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, realizes the steps of the robot obstacle avoidance method according to any one of claims 1 to 6.
9. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the robot obstacle avoidance method of any one of claims 1 to 6.
CN201911268466.9A 2019-12-11 2019-12-11 Robot and obstacle avoidance method and device thereof Active CN111142514B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911268466.9A CN111142514B (en) 2019-12-11 2019-12-11 Robot and obstacle avoidance method and device thereof


Publications (2)

Publication Number Publication Date
CN111142514A CN111142514A (en) 2020-05-12
CN111142514B true CN111142514B (en) 2024-02-13

Family

ID=70518023

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911268466.9A Active CN111142514B (en) 2019-12-11 2019-12-11 Robot and obstacle avoidance method and device thereof

Country Status (1)

Country Link
CN (1) CN111142514B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022141116A1 (en) * 2020-12-29 2022-07-07 深圳市大疆创新科技有限公司 Three-dimensional point cloud segmentation method and apparatus, and movable platform
CN113034570A (en) * 2021-03-09 2021-06-25 北京字跳网络技术有限公司 Image processing method and device and electronic equipment
CN115703234B (en) * 2021-08-03 2024-01-30 北京小米移动软件有限公司 Robot control method, device, robot and storage medium
CN113917917B (en) * 2021-09-24 2023-09-15 四川启睿克科技有限公司 Obstacle avoidance method and device for indoor bionic multi-legged robot and computer readable medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008099652A1 (en) * 2007-02-13 2008-08-21 Toyota Jidosha Kabushiki Kaisha Environment map drawing method and mobile robot
CN108171715A (en) * 2017-12-05 2018-06-15 浙江大华技术股份有限公司 A kind of image partition method and device
CN109141364A (en) * 2018-08-01 2019-01-04 北京进化者机器人科技有限公司 Obstacle detection method, system and robot
CN109959377A (en) * 2017-12-25 2019-07-02 北京东方兴华科技发展有限责任公司 A kind of robot navigation's positioning system and method
CN110441791A (en) * 2019-08-14 2019-11-12 深圳无境智能机器人有限公司 A kind of ground obstacle detection method based on the 2D laser radar that leans forward


Also Published As

Publication number Publication date
CN111142514A (en) 2020-05-12

Similar Documents

Publication Publication Date Title
CN111142514B (en) Robot and obstacle avoidance method and device thereof
CN108319655B (en) Method and device for generating grid map
CN112785702B (en) SLAM method based on tight coupling of 2D laser radar and binocular camera
KR101708659B1 (en) Apparatus for recognizing location mobile robot using search based correlative matching and method thereof
EP2959315B1 (en) Generation of 3d models of an environment
US11184604B2 (en) Passive stereo depth sensing
WO2021052283A1 (en) Method for processing three-dimensional point cloud data and computing device
CN109977466B (en) Three-dimensional scanning viewpoint planning method and device and computer readable storage medium
KR20150144731A (en) Apparatus for recognizing location mobile robot using edge based refinement and method thereof
CN111612728B (en) 3D point cloud densification method and device based on binocular RGB image
US8682037B2 (en) Method and system for thinning a point cloud
WO2021016854A1 (en) Calibration method and device, movable platform, and storage medium
CN113111513B (en) Sensor configuration scheme determining method and device, computer equipment and storage medium
JPWO2017051480A1 (en) Image processing apparatus and image processing method
CN111415420B (en) Spatial information determining method and device and electronic equipment
CN112166457A (en) Point cloud segmentation method and system and movable platform
JP7219561B2 (en) In-vehicle environment recognition device
CN111198378A (en) Boundary-based autonomous exploration method and device
CN111369680B (en) Method and device for generating three-dimensional image of building
CN114219770A (en) Ground detection method, ground detection device, electronic equipment and storage medium
CN110706288A (en) Target detection method, device, equipment and readable storage medium
Goshin et al. Parallel implementation of the multi-view image segmentation algorithm using the Hough transform
CN115507840A (en) Grid map construction method, grid map construction device and electronic equipment
CN115511944A (en) Single-camera-based size estimation method, device, equipment and storage medium
JP7195785B2 (en) Apparatus, method and program for generating 3D shape data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant