CN115346020A - Point cloud processing method, obstacle avoidance method, device, robot and storage medium - Google Patents

Point cloud processing method, obstacle avoidance method, device, robot and storage medium

Info

Publication number
CN115346020A
CN115346020A (application CN202211177209.6A)
Authority
CN
China
Prior art keywords
point
point cloud
obstacle
sub
candidate target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211177209.6A
Other languages
Chinese (zh)
Inventor
赖志林
李聪平
李良源
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Saite Intelligent Technology Co Ltd
Original Assignee
Guangzhou Saite Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Saite Intelligent Technology Co Ltd filed Critical Guangzhou Saite Intelligent Technology Co Ltd
Priority to CN202211177209.6A priority Critical patent/CN115346020A/en
Publication of CN115346020A publication Critical patent/CN115346020A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a point cloud processing method, an obstacle avoidance method, a device, a robot and a storage medium. The point cloud processing method comprises the following steps: acquiring a first point cloud and a second point cloud collected by a depth camera; registering points in the first point cloud and the second point cloud; determining, according to the three-dimensional coordinates of each point in the second point cloud, the sub-region to which each point belongs among a plurality of sub-regions of a preset plane, and determining the point closest to the depth camera in each sub-region as a candidate target point; determining matching points of the candidate target points from the first point cloud; and determining candidate target points whose distance to their matching point is smaller than a preset threshold as target points to obtain the obstacle point cloud. The point cloud processing method of this embodiment processes a small amount of data, the extracted obstacle point cloud reflects the surface features of the obstacle, noise points that are useless for obstacle avoidance are filtered out, the generated obstacle avoidance path is highly accurate, and the obstacle avoidance reliability of the robot is improved.

Description

Point cloud processing method, obstacle avoidance method, device, robot and storage medium
Technical Field
The invention relates to the technical field of mobile robot control, in particular to a point cloud processing method, an obstacle avoidance method, a point cloud processing device, a robot and a storage medium.
Background
In the technical field of mobile robot control, a sensor is required to sense the surrounding environment of a path traveled by a mobile robot so as to identify an obstacle and avoid the obstacle.
A 2D lidar only perceives obstacles within a plane at a fixed height, so obstacles on the ground that are lower or higher than that plane cannot be perceived; a 3D lidar can perceive obstacles omnidirectionally but is expensive. Depth cameras are therefore widely used on mobile robots because of their low cost.
However, existing methods of processing the point cloud obtained by sensing the environment with a depth camera suffer from a large amount of data to process, many noise points, and an inability to reflect the features of obstacles.
Disclosure of Invention
The invention provides a point cloud processing method, an obstacle avoidance method, a device, a robot and a storage medium, and aims to solve the problems that existing point cloud processing methods involve a large amount of data, contain many noise points and cannot preserve obstacle features.
In a first aspect, the present invention provides a point cloud processing method, including:
acquiring a first point cloud and a second point cloud acquired by a depth camera;
registering points in the first point cloud and the second point cloud;
determining, according to the three-dimensional coordinates of each point in the second point cloud, the sub-area to which each point belongs among a plurality of sub-areas of a preset plane, wherein the preset plane is a plane parallel to the imaging plane of the depth camera;
determining a point which is closest to the depth camera in each sub-area as a candidate target point;
determining matching points of the candidate target points from the first point cloud;
and determining the candidate target point with the distance to the matching point smaller than a preset threshold value as a target point to obtain the obstacle point cloud.
In a second aspect, the present invention provides a robot obstacle avoidance method, including:
acquiring a depth image currently acquired by a depth camera;
generating a point cloud according to the depth image;
determining an obstacle point cloud from the point cloud;
controlling a robot to avoid obstacles according to the obstacle point cloud;
wherein the obstacle point cloud is determined by the point cloud processing method of the first aspect.
In a third aspect, the present invention provides a point cloud processing apparatus, including:
the point cloud acquisition module is used for acquiring a first point cloud and a second point cloud acquired by the depth camera;
a point cloud registration module for registering points in the first point cloud and the second point cloud;
a sub-region determining module, configured to determine, according to the three-dimensional coordinates of each point in the second point cloud, a sub-region to which each point belongs in a plurality of sub-regions of a preset plane, where the preset plane is a plane parallel to an imaging plane of the depth camera;
a candidate target point determining module, configured to determine a point closest to the depth camera in each of the sub-regions as a candidate target point;
a matching point determination module for determining matching points of the candidate target points from the first point cloud;
and the obstacle point cloud determining module is used for determining the candidate target point with the distance to the matching point smaller than a preset threshold value as the target point to obtain the obstacle point cloud.
In a fourth aspect, the present invention provides an obstacle avoidance apparatus for a robot, including:
the depth image acquisition module is used for acquiring a depth image currently acquired by the depth camera;
the point cloud generating module is used for generating a point cloud according to the depth image;
the obstacle point cloud determining module is used for determining an obstacle point cloud from the point clouds;
the obstacle avoidance module is used for controlling the robot to avoid obstacles according to the obstacle point cloud;
wherein the obstacle point cloud is determined by the point cloud processing method of the first aspect.
In a fifth aspect, the present invention provides a robot comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor, the computer program being executable by the at least one processor to enable the at least one processor to perform the point cloud processing method of the first aspect and/or the robot obstacle avoidance method of the second aspect.
In a sixth aspect, the present invention provides a computer-readable storage medium, which stores computer instructions for causing a processor to implement the point cloud processing method of the first aspect and/or the robot obstacle avoidance method of the second aspect when executed.
In this embodiment, the preset plane is a plane that comprises a plurality of sub-regions and is parallel to the imaging plane of the depth camera. Points in the first point cloud and the second point cloud are registered; the sub-region to which each point belongs among the plurality of sub-regions of the preset plane is determined according to the three-dimensional coordinates of each point in the second point cloud; the point in each sub-region closest to the depth camera is determined as a candidate target point; a matching point of each candidate target point is determined from the first point cloud; and candidate target points whose distance to their matching point is smaller than a preset threshold are determined as target points, so that the obstacle point cloud is obtained. On the one hand, determining the sub-regions to which the points of the point cloud belong in the preset plane and keeping only the point closest to the depth camera in each sub-region as a candidate target point reduces the amount of point cloud data, while the extracted points still reflect the surface features of the obstacle, so that features behind the obstacle that are useless for obstacle avoidance are filtered out. On the other hand, the matching points of the candidate target points are determined in the first point cloud through point cloud registration, and only candidate target points less than the preset threshold away from their matching points are kept as the obstacle point cloud, so that noise points are effectively filtered out.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present invention, nor do they necessarily limit the scope of the invention. Other features of the present invention will become apparent from the following description.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of point cloud segmentation in a conventional point cloud processing method;
fig. 2 is a flowchart of a point cloud processing method according to an embodiment of the present invention;
fig. 3A is a flowchart of a point cloud processing method according to a second embodiment of the present invention;
FIG. 3B is a schematic view of a stool serving as an example obstacle;
FIG. 3C is a schematic diagram of a second point cloud;
FIG. 3D is a schematic diagram of a preset plane;
FIG. 3E is a schematic illustration of a horizontal angle and a vertical angle;
FIG. 3F is a schematic diagram of a point cloud obtained by determining candidate target points;
FIG. 3G is a schematic diagram of an obstacle point cloud;
fig. 4 is a flowchart of an obstacle avoidance method for a robot according to a third embodiment of the present invention;
fig. 5 is a schematic structural diagram of a point cloud processing apparatus according to a fourth embodiment of the present invention;
fig. 6 is a schematic structural diagram of an obstacle avoidance apparatus for a robot according to a fifth embodiment of the present invention;
fig. 7 is a schematic structural diagram of a robot according to a sixth embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
In an existing point cloud processing method, the point cloud is first divided into a plurality of subspaces in three-dimensional space, and then one point is randomly selected from the point cloud in each subspace to form a new point cloud for the obstacle avoidance decision. Taking the stool in FIG. 1 as an example of an obstacle, a cube 1 is the minimum circumscribed cube of the stool A; the cube 1 is divided into n × n subcubes 2, the point cloud of the stool A is segmented by the subcubes 2 to obtain n² sub point clouds, and one point is then randomly selected from each subcube 2, giving n² points for the obstacle avoidance decision. For a robot located in the direction B of the stool A, obstacle avoidance is mainly concerned with the influence of the surface features of the stool A in the direction B. When the point cloud of the stool A is segmented by the segmentation space shown in FIG. 1, the point clouds of the two rear vertical legs and of part of the transverse legs of the stool A are also extracted; on the one hand the amount of extracted point cloud data is large, and on the other hand the point cloud of the internal structure of the stool A is extracted and forms noise points, so that the finally extracted point cloud cannot reflect the surface features of the stool A in the direction B. To solve the above problems, embodiments of the present invention provide a point cloud processing method, described in detail in the following embodiments.
Example one
Fig. 2 is a flowchart of a point cloud processing method according to an embodiment of the present invention, where the method is applicable to a case where an obstacle point cloud is determined in the point cloud, and the method may be executed by a point cloud processing device, where the point cloud processing device may be implemented in a form of hardware and/or software, and the point cloud processing device may be configured in a robot, for example, a controller of the robot or a background server. As shown in fig. 2, the point cloud processing method includes:
s201, acquiring a first point cloud and a second point cloud acquired by a depth camera.
In one scenario, a depth camera may be installed on the mobile robot, and a depth image may be collected by the depth camera and then converted into a point cloud, wherein a first point cloud and a second point cloud may be respectively generated according to two adjacent frames of depth images collected by the depth camera, and the second point cloud is collected after the first point cloud is collected.
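As an illustration only (not part of the disclosed method), the following minimal sketch back-projects a depth image into a point cloud using an assumed pinhole camera model; the intrinsic parameters fx, fy, cx, cy, the function name and the (horizontal, vertical, depth) column ordering chosen to match the coordinate convention used later in this description are assumptions of the sketch.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into an N x 3 point cloud.
    Columns are ordered (y, z, x) = (horizontal, vertical, depth)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = depth                          # depth along the optical axis
    y = (u - cx) * x / fx              # horizontal offset
    z = -(v - cy) * x / fy             # vertical offset (image v grows downward)
    points = np.stack([y, z, x], axis=-1).reshape(-1, 3)
    return points[depth.reshape(-1) > 0]   # drop pixels with no valid depth

# first_cloud and second_cloud would come from two consecutive depth frames:
# first_cloud = depth_to_point_cloud(depth_frame_t0, fx, fy, cx, cy)
# second_cloud = depth_to_point_cloud(depth_frame_t1, fx, fy, cx, cy)
```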
S202, registering points in the first point cloud and the second point cloud.
In one embodiment, the points in the first point cloud and the second point cloud may be registered through a point cloud registration algorithm to obtain a transformation matrix between the coordinates of the two point clouds. A point cloud registration algorithm takes two point clouds Ps (source) and Pt (target) as input and outputs a transformation T (i.e., a rotation R and a translation t) such that, after the coordinates of the points in Ps are transformed into the coordinate system of Pt through T, the coordinate error between each point in Ps and its corresponding point in Pt is as small as possible.
Specifically, in this embodiment, the first point cloud and the second point cloud may be registered through a point cloud registration algorithm such as ICP (Iterative Closest Point) or NDT (Normal Distributions Transform) to obtain a transformation matrix from the second point cloud to the first point cloud; the coordinates of the points in the second point cloud can be transformed into the coordinate system of the first point cloud through this transformation matrix.
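By way of illustration, a minimal sketch of such a registration using the open-source Open3D library's point-to-point ICP is given below; the library choice, function name and parameter values (e.g. the 5 cm correspondence distance) are assumptions of the sketch, not requirements of the method.

```python
import numpy as np
import open3d as o3d

def register_clouds(second_points, first_points, max_dist=0.05):
    """Estimate the transformation matrix T' mapping the second point cloud
    into the coordinate system of the first point cloud via ICP."""
    source = o3d.geometry.PointCloud()
    source.points = o3d.utility.Vector3dVector(second_points)   # second point cloud
    target = o3d.geometry.PointCloud()
    target.points = o3d.utility.Vector3dVector(first_points)    # first point cloud
    result = o3d.pipelines.registration.registration_icp(
        source, target, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation   # 4 x 4 homogeneous transform T'
```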
S203, determining, according to the three-dimensional coordinates of each point in the second point cloud, the sub-area to which each point belongs among a plurality of sub-areas of a preset plane, wherein the preset plane is a plane parallel to the imaging plane of the depth camera.
As shown in fig. 3D, the imaging plane of the depth camera may be an imaging plane of a photosensitive element in the depth camera, for example, a surface of a CCD (charge coupled device), the preset plane is parallel to the imaging plane and has a rectangular outer contour, and the side length of the preset plane may be obtained by scaling the side lengths of the photosensitive element in an equal ratio.
In this embodiment, the center O of the preset plane is flush with the optical center k of the depth camera, m × n rectangular sub-regions are divided on the preset plane, and the sub-region to which each point belongs can be determined according to the three-dimensional coordinates of each point in the second point cloud, where the origin of coordinates of the three-dimensional coordinates of each point in the second point cloud is the center O of the preset plane.
In an optional embodiment, a horizontal angle range and a vertical angle range can be set for each sub-region according to the position of the sub-region, then the horizontal angle and the vertical angle of each point are calculated through the three-dimensional coordinates of each point in the second point cloud, and the sub-region to which each point belongs is determined through the horizontal angle range to which the horizontal angle belongs and the vertical angle range to which the vertical angle belongs.
In another optional embodiment, each point in the second point cloud may be vertically projected onto a preset plane to obtain a sub-region to which each point belongs, a horizontal coordinate range and a vertical coordinate range may be set for each sub-region, and the sub-region to which each point belongs is determined according to the horizontal coordinate range to which the horizontal coordinate of each point belongs and the vertical coordinate range to which the vertical coordinate belongs. It should be noted that each sub-region includes at least one point in the second point cloud.
And S204, determining a point which is closest to the depth camera in each sub-area as a candidate target point.
Since each sub-area may include a plurality of points, and the three-dimensional coordinates of each point include the depth value from that point to the depth camera, the point with the minimum depth value in each sub-area may be determined as the point closest to the camera in that sub-area, giving a candidate target point. Because the point on an object closest to the depth camera is a point on the surface of the object, the extracted candidate target points capture the features of the object surface, so that points inside the object that are useless for obstacle avoidance can be filtered out, the amount of data to be processed subsequently is reduced, and the surface features of the object can still be extracted.
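A minimal sketch of this selection step is given below, assuming the points are stored as an N x 3 NumPy array ordered (y, z, x) with x the depth to the camera, and that each point has already been assigned a sub-region label (one way of computing such labels is sketched in the second embodiment below); the helper name and data layout are illustrative assumptions.

```python
import numpy as np

def nearest_point_per_subregion(points, region_ids):
    """points: N x 3 array ordered (y, z, x), with x the depth to the camera.
    region_ids: length-N sequence giving the sub-region of each point.
    Returns one candidate target point per sub-region: the point with the
    smallest depth value, i.e. the point closest to the depth camera."""
    depth = points[:, 2]
    best = {}
    for i, rid in enumerate(region_ids):
        if rid not in best or depth[i] < depth[best[rid]]:
            best[rid] = i              # remember index of the nearest point so far
    return points[sorted(best.values())]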
S205, determining matching points of the candidate target points from the first point cloud.
For the obstacle, each point of the obstacle in the second point cloud has a corresponding matching point in the first point cloud. In practical applications, the matching points of the points of the second point cloud in the first point cloud are obtained when the first and second point clouds are registered through the point cloud registration algorithm, for example in S202; since the candidate target points are points of the second point cloud, their matching points in the first point cloud can be determined from the registration result of S202.
And S206, determining the candidate target point with the distance to the matching point smaller than a preset threshold value as a target point to obtain the obstacle point cloud.
After the registration in S202, a transformation matrix is obtained that transforms the coordinates of the points in the second point cloud into the coordinate system of the first point cloud. The coordinates of each candidate target point can be transformed into the coordinate system of the first point cloud through this matrix, and the distance between the candidate target point and its matching point is then calculated from the two sets of coordinates. If the distance is smaller than the preset threshold, the candidate target point is a point on the surface of the object; if the distance is greater than or equal to the preset threshold, the candidate target point is a noise point, for example a noise point formed when a tiny object such as dust or a plant leaf passing in front of the depth camera is captured by it. Such noise points can thus be filtered out, improving the accuracy of the extracted obstacle point cloud.
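The following sketch illustrates this filtering step under the assumption that the matching point of each candidate target point is taken as its nearest neighbour in the first point cloud after applying the transformation matrix; the nearest-neighbour search (SciPy cKDTree), the function name and the 3 cm threshold are assumptions of the sketch, not part of the disclosure.

```python
import numpy as np
from scipy.spatial import cKDTree

def filter_obstacle_points(candidates, first_cloud, transform, threshold=0.03):
    """Keep candidate target points whose distance to their matching point
    in the first point cloud is below the threshold; farther points are
    treated as noise (e.g. dust or leaves passing in front of the camera)."""
    homo = np.hstack([candidates, np.ones((len(candidates), 1))])
    in_first = (transform @ homo.T).T[:, :3]         # coords in the first cloud's frame
    dists, _ = cKDTree(first_cloud).query(in_first)  # distance to nearest matching point
    return candidates[dists < threshold]
```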
In this embodiment, the preset plane is a plane that comprises a plurality of sub-regions and is parallel to the imaging plane of the depth camera. The points in the first and second point clouds collected by the depth camera are registered; the sub-region to which each point belongs among the plurality of sub-regions of the preset plane is determined according to the three-dimensional coordinates of each point in the second point cloud; the point in each sub-region closest to the depth camera is determined as a candidate target point; a matching point of each candidate target point is determined from the first point cloud; and candidate target points whose distance to their matching point is smaller than a preset threshold are determined as target points, so that the obstacle point cloud is obtained. On the one hand, determining the sub-regions to which the points of the point cloud belong in the preset plane and keeping only the point closest to the depth camera in each sub-region as a candidate target point reduces the amount of point cloud data to process, and the extracted points of the obstacle point cloud reflect the surface features of the obstacle, so that the point clouds of features behind or inside the obstacle that are useless for obstacle avoidance are filtered out. On the other hand, the matching points of the candidate target points are determined in the first point cloud through point cloud registration, and only candidate target points less than the preset threshold away from their matching points are kept as the obstacle point cloud, so that noise points are effectively filtered out, the extracted obstacle point cloud is more accurate, and the obstacle avoidance accuracy of the robot is improved.
Example two
Fig. 3A is a flowchart of a point cloud processing method according to a second embodiment of the present invention, which is optimized based on the first embodiment of the present invention, and as shown in fig. 3A, the point cloud processing method includes:
s301, acquiring a first original point cloud and a second original point cloud acquired by the depth camera, wherein the first original point cloud and the second original point cloud are point clouds under a coordinate system of the depth camera.
The mobile robot may be equipped with a depth camera, and a first depth image and a second depth image may be obtained by shooting an environment in front of the mobile robot through the depth camera, where the second depth image is an image collected after the first depth image is collected, for example, the depth camera collects depth images by exposure according to a preset frequency, the first depth image and the second depth image are two adjacent frames of images, and the second depth image is a subsequent image.
After the depth image is acquired, each pixel point on the depth image comprises the three-dimensional coordinates of a point on an object in a real scene, so that a point cloud can be generated according to the first depth image to obtain a first original point cloud, and a point cloud is generated according to the second depth image to obtain a second original point cloud, wherein the first original point cloud and the second original point cloud are point clouds under the coordinate system of the depth camera.
S302, converting the first original point cloud and the second original point cloud into point clouds under a world coordinate system, and removing the point clouds on the ground to obtain the first point cloud and the second point cloud.
Specifically, the first and second original point clouds may be converted into point clouds under a world coordinate system by a rotation matrix T of the coordinate system of the depth camera to the world coordinate system, where the rotation matrix T of the coordinate system of the depth camera to the world coordinate system is as follows:
[Rotation matrix T from the depth camera coordinate system to the world coordinate system, composed from the rotation angles θx, θy and θz about the x, y and z axes; formula image not reproduced]
In the above formula, θx, θy and θz are the rotation angles of the depth camera about the x, y and z axes, respectively. The first original point cloud and the second original point cloud are converted into point clouds in the world coordinate system through the rotation matrix T, and the ground point cloud is then removed to obtain the first point cloud and the second point cloud. The ground point cloud may be removed with reference to any existing point cloud ground segmentation method, which is not described in detail here.
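Since the formula image is not reproduced, the sketch below shows one possible way to perform this conversion, assuming the rotation matrix is composed from the three angles in x-y-z order, that the points use a plain (x, y, z) column order with the world z axis pointing up, and that the ground is removed by a simple height threshold; these choices stand in for the patent's rotation matrix and for an existing ground segmentation method and are assumptions of the sketch.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def camera_to_world(points_cam, theta_x, theta_y, theta_z, ground_eps=0.02):
    """Rotate a point cloud from the depth camera frame to the world frame
    and drop points close to the ground plane (world z axis assumed up)."""
    T = Rotation.from_euler('xyz', [theta_x, theta_y, theta_z]).as_matrix()
    points_world = points_cam @ T.T                         # apply rotation matrix T
    return points_world[points_world[:, 2] > ground_eps]   # crude ground removal
```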
Taking the stool shown in fig. 3B as an example, the resulting point cloud is shown in fig. 3C, where area a in fig. 3C is the point cloud for the stool in fig. 3B.
And S303, registering the points in the first point cloud and the second point cloud.
In this embodiment, the first point cloud and the second point cloud may be registered through a point cloud registration algorithm such as ICP (Iterative Closest Point) or NDT (Normal Distributions Transform) to obtain a transformation matrix T' from the second point cloud to the first point cloud; the coordinates of the points in the second point cloud can be transformed into the coordinate system of the first point cloud through the transformation matrix T'.
S304, calculating the horizontal angle and the vertical angle of each point by adopting the three-dimensional coordinates of each point in the second point cloud.
As shown in fig. 3D, the imaging plane of the depth camera may be an imaging plane of a photosensitive element in the depth camera, for example, a surface of a CCD (charge coupled device), the preset plane is parallel to the imaging plane and has a rectangular outer contour, and the side length of the preset plane may be obtained by scaling the side lengths of the photosensitive element in an equal ratio.
In this embodiment, the center O of the preset plane is flush with the optical center k of the depth camera, m × n rectangular sub-regions are divided on the preset plane, and the sub-region to which each point belongs can be determined according to the three-dimensional coordinates of each point in the second point cloud, where the origin of coordinates of the three-dimensional coordinates of each point in the second point cloud is the center O of the preset plane.
As shown in FIG. 3D, the four vertexes a, b, c, d of the preset plane are respectively connected with the optical center k. The length of the line segment ak is equal to the length of the line segment bk, and the length of the line segment ab can be calculated from the three-dimensional coordinates of the vertexes a and b, so the included angle ∠akb between the line segments ak and bk can be calculated.
As shown at the origin O of the coordinate system in FIG. 3D, for the horizontal angle, the positive direction of the coordinate axis y corresponds to positive angles and the negative direction of the coordinate axis y corresponds to negative angles. The angular range of the jth sub-region in the positive direction of the coordinate axis y is [(j-1)·i, j·i], where i is the size of the horizontal angular step, and the angular range of the jth sub-region in the negative direction of the coordinate axis y is [-j·i, -(j-1)·i]. Similarly, for the vertical angle, the positive direction of the coordinate axis z corresponds to positive angles and the negative direction of the coordinate axis z corresponds to negative angles; the angular range of the jth sub-region in the positive direction of the coordinate axis z is [(j-1)·s, j·s], where s is the size of the vertical angular step, and the angular range of the jth sub-region in the negative direction of the coordinate axis z is [-j·s, -(j-1)·s].
Of course, in another optional embodiment, the three-dimensional coordinates of the four vertexes of each sub-region may also be obtained; the horizontal and vertical distances of the four vertexes from the coordinate axes z and y of the preset plane are calculated from these coordinates, and the horizontal angle range and the vertical angle range of the sub-region are then calculated from those distances. The way in which the horizontal and vertical angle ranges of a sub-region are calculated is not limited in this embodiment.
In an optional embodiment, the horizontal angle of each point in the second point cloud is an included angle between a first straight line and a second straight line, the vertical angle is an included angle between the first straight line and a third straight line, the first straight line is a connection line between each point and the center of the depth camera, the second straight line is a connection line between a projection point of each point on a vertical central line of the preset plane and the center of the depth camera, and the third straight line is a connection line between a projection point of each point on a horizontal central line of the preset plane and the center of the depth camera.
Specifically, as shown in FIG. 3E, for each point P(y, z, x) in the second point cloud, the projection of the point P onto the vertical center line (the z-axis) of the preset plane is the point P1, and its projection onto the horizontal center line (the y-axis) of the preset plane is the point P2. The horizontal angle of the point P is the included angle θHFOV between the straight line kP and the straight line kP1, and the vertical angle of the point P is the included angle θVFOV between the straight line kP and the straight line kP2.
In an alternative embodiment, an inverse tangent function value of an absolute value of a ratio of the horizontal coordinate value to the depth value of each point in the second point cloud may be calculated to obtain a horizontal angle of each point, and an inverse tangent function value of an absolute value of a ratio of the vertical coordinate value to the depth value of each point in the second point cloud may be calculated to obtain a vertical angle of each point, specifically the following formula:
θHFOV = arctan(|y / x|)
θVFOV = arctan(|z / x|)
wherein y is a horizontal coordinate value of the point P, x is a depth value of the point P, z is a vertical coordinate value of the point P, and arctan () is an arctan function.
S305, determining the sub-area of the preset plane whose horizontal angle range contains the horizontal angle and whose vertical angle range contains the vertical angle as the sub-area to which each point belongs.
Each sub-area of the preset plane in this embodiment is preset with a horizontal angle range and a vertical angle range. After the horizontal angle θHFOV and the vertical angle θVFOV of each point in the second point cloud are calculated, the sub-areas whose horizontal angle range contains θHFOV are determined, and among them the sub-area whose vertical angle range contains θVFOV is determined as the sub-area to which the point P belongs.
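A minimal sketch of this assignment is shown below; signed angles via arctan2 are used so that the positive and negative angle ranges described above receive distinct indices, and the (y, z, x) array layout, function name and degree-valued angular steps are the same illustrative assumptions as in the earlier sketches. The returned labels can be fed to the nearest-point selection sketch given in the first embodiment.

```python
import numpy as np

def subregion_ids(points, h_step_deg, v_step_deg):
    """Assign each point of the second point cloud to a sub-region of the
    preset plane based on its horizontal and vertical angles.
    points: N x 3 array ordered (y, z, x) = (horizontal, vertical, depth)."""
    y, z, x = points[:, 0], points[:, 1], points[:, 2]
    theta_hfov = np.degrees(np.arctan2(y, x))   # signed horizontal angle
    theta_vfov = np.degrees(np.arctan2(z, x))   # signed vertical angle
    h_idx = np.floor(theta_hfov / h_step_deg).astype(int)   # horizontal range index
    v_idx = np.floor(theta_vfov / v_step_deg).astype(int)   # vertical range index
    return list(zip(h_idx, v_idx))
```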
S306, acquiring the depth value of the point contained in the sub-area.
Since each sub-region may include a plurality of points, and the three-dimensional coordinates of each point include a depth value from the point to the depth camera, the depth value of each point may be read from the three-dimensional coordinates of the point of each sub-region, and the x-coordinate value of each point in fig. 3E is the depth value.
And S307, determining the point with the minimum depth value as the point which is closest to the depth camera in the sub-area to serve as a candidate target point.
Specifically, among the points of each sub-area, the point with the minimum x-coordinate value is determined as the point with the minimum depth value and is taken as the point closest to the depth camera in that sub-area, giving a candidate target point. Because the point on an object closest to the depth camera is a point on the surface of the object, extracting the closest point of each sub-area as a candidate target point captures the features of the object surface, so that points inside the object that are useless for obstacle avoidance can be filtered out, the amount of data to be processed subsequently is reduced, and the surface features of the object can still be extracted.
FIG. 3F is a schematic diagram of the point cloud after only the candidate target point is retained in each sub-region. Compared to the point cloud in FIG. 3C, the number of points is significantly reduced; region A retains the points on the outer surface of the stool closest to the depth camera, while the lateral supports inside the stool and the rear supports facing away from the depth camera are filtered out.
And S308, converting the three-dimensional coordinates of the candidate target point into coordinates of the first point cloud under the coordinate system by adopting the conversion matrix to obtain the three-dimensional coordinates of the candidate target point under the coordinate system of the first point cloud.
In S303, the first point cloud and the second point cloud are registered to obtain a transformation matrix T 'from the second point cloud to the first point cloud, and the three-dimensional coordinates of the candidate target point can be transformed into coordinates of the first point cloud in the coordinate system through the transformation matrix T', so as to obtain the three-dimensional coordinates of the candidate target point in the coordinate system of the first point cloud.
S309, calculating the distance between the candidate target point and the matching point according to the three-dimensional coordinates of the candidate target point in the coordinate system of the first point cloud and the three-dimensional coordinates of the matching point.
The first point cloud and the second point cloud may be two adjacent frames of point clouds. When the first and second point clouds are matched, the points in the second point cloud have corresponding matching points in the first point cloud, and the distance between a candidate target point and its matching point can be calculated from the three-dimensional coordinates of the candidate target point in the coordinate system of the first point cloud and the three-dimensional coordinates of the matching point.
S310, determining the candidate target point whose distance from the matching point is smaller than the preset threshold as a target point to obtain the obstacle point cloud.
If the distance between the candidate target point and the matching point is smaller than the preset threshold, the candidate target point is a point on the object surface and can be retained in the second point cloud as a target point. If the distance is greater than or equal to the preset threshold, the candidate target point is a noise point, for example a noise point formed when a tiny object such as dust or a plant leaf passing in front of the depth camera is captured by it; such noise points can be filtered out, improving the accuracy of the extracted obstacle point cloud.
Fig. 3G shows the finally extracted point cloud of the stool in fig. 3B, which includes the point cloud of the top surface of the stool, the point cloud of the two legs facing the depth camera, and the point cloud of the lateral legs.
In this embodiment, after the first and second original point clouds collected by the depth camera are converted into the first and second point clouds in the world coordinate system, the points in the first and second point clouds are registered. The horizontal angle and the vertical angle of each point are calculated from the three-dimensional coordinates of each point in the second point cloud; the sub-area of the preset plane whose horizontal angle range contains the horizontal angle and whose vertical angle range contains the vertical angle is determined as the sub-area to which the point belongs; and the point with the minimum depth value in each sub-area is taken as a candidate target point. The coordinates of the candidate target point in the coordinate system of the first point cloud are then obtained through the transformation matrix, the distance between the candidate target point and its matching point is calculated from those coordinates and the three-dimensional coordinates of the matching point, and candidate target points whose distance to their matching point is smaller than the preset threshold are determined as target points to obtain the obstacle point cloud. On the one hand, determining the sub-areas to which the points belong in the preset plane and keeping only the point closest to the depth camera in each sub-area as a candidate target point reduces the amount of point cloud data to process, and the retained candidate target points reflect the surface features of the obstacle, so that the point clouds of features behind or inside the obstacle that are useless for obstacle avoidance are filtered out. On the other hand, the matching points of the candidate target points are determined in the first point cloud through the transformation matrix, and only candidate target points less than the preset threshold away from their matching points are kept as the obstacle point cloud, so that noise points are effectively filtered out, the extracted obstacle point cloud is more accurate, and the obstacle avoidance accuracy of the robot is improved.
EXAMPLE III
Fig. 4 is a flowchart of an obstacle avoidance method for a robot according to a third embodiment of the present invention, where this embodiment is applicable to a case where an obstacle is determined by a point cloud to control a robot to avoid an obstacle, and the method may be executed by a robot obstacle avoidance apparatus, where the robot obstacle avoidance apparatus may be implemented in hardware and/or software, and the robot obstacle avoidance apparatus may be configured in a robot, for example, a controller of the robot or a backend server. As shown in fig. 4, the robot obstacle avoidance method includes:
s401, obtaining a depth image currently collected by a depth camera.
In this embodiment, the mobile robot may be provided with a depth camera, and the depth image may be collected by the depth camera, for example, the depth camera may be exposed and collected according to a preset frequency.
S402, generating a point cloud according to the depth image.
Specifically, after the depth image is acquired, each pixel point on the depth image includes the three-dimensional coordinates of a point on an object in a real scene, so that a point cloud can be generated according to the depth image, wherein the point cloud is a point cloud corresponding to the depth image currently acquired by the depth camera.
And S403, determining the point cloud of the obstacle from the point cloud.
Optionally, the obstacle point cloud may be determined by the point cloud processing method of the first or second embodiment. Specifically, the point cloud corresponding to the current frame of depth image is taken as the second point cloud, and a first point cloud is acquired, which may be the point cloud corresponding to the previous frame of depth image. The points in the first point cloud and the second point cloud are then registered; the sub-region to which each point belongs among a plurality of sub-regions of a preset plane is determined according to the three-dimensional coordinates of each point in the second point cloud, the preset plane being a plane taking the center line of the depth camera as its normal; the point closest to the depth camera in each sub-region is determined as a candidate target point; matching points of the candidate target points are determined from the first point cloud; and candidate target points whose distance to their matching point is smaller than a preset threshold are determined as target points to obtain the obstacle point cloud.
And S404, controlling the robot to avoid the obstacle according to the obstacle point cloud.
In an optional embodiment, the point cloud of the obstacle can be projected to a map to obtain the position of the obstacle in the map, an obstacle avoidance path is generated according to the position of the obstacle in the map, and the robot is controlled to move according to the obstacle avoidance path. The method for generating the obstacle avoidance path may refer to the prior art, and is not described in detail here.
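A minimal sketch of the projection step is given below, assuming the map is a 2D occupancy grid in the world frame with a given resolution and origin; the grid representation, the value 100 for occupied cells and the function name are assumptions for illustration, and the path planner itself (e.g. A*) is not shown.

```python
import numpy as np

def mark_obstacles_on_grid(obstacle_points, grid, resolution, origin_xy):
    """Project the obstacle point cloud onto a 2D occupancy grid so that a
    planner can generate an obstacle avoidance path around the obstacle.
    obstacle_points: N x 3 world-frame points; grid: H x W int array."""
    cols = ((obstacle_points[:, 0] - origin_xy[0]) / resolution).astype(int)
    rows = ((obstacle_points[:, 1] - origin_xy[1]) / resolution).astype(int)
    valid = (rows >= 0) & (rows < grid.shape[0]) & (cols >= 0) & (cols < grid.shape[1])
    grid[rows[valid], cols[valid]] = 100   # mark occupied cells
    return grid
```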
In the robot obstacle avoidance method of this embodiment, after the depth image currently acquired by the depth camera is acquired and a point cloud is generated from it, the obstacle point cloud can be determined by the point cloud processing method of the first or second embodiment to control the obstacle avoidance of the robot. In that point cloud processing method, on the one hand, the sub-region to which each point of the point cloud belongs in the preset plane is determined and the point closest to the depth camera in each sub-region is determined as a candidate target point, which reduces the amount of point cloud data to process; the retained points of the obstacle point cloud reflect the surface features of the obstacle, so that features behind the obstacle that are useless for obstacle avoidance are filtered out, the efficiency of generating the obstacle avoidance path is improved, and the obstacle avoidance of the robot can be controlled in time. On the other hand, the matching points of the candidate target points are determined in the first point cloud through registration, and only candidate target points less than the preset threshold away from their matching points are kept as the obstacle point cloud, so that noise points are effectively filtered out, the influence of noise is avoided, the generated obstacle avoidance path is highly accurate, and the obstacle avoidance reliability of the robot is improved.
Example four
Fig. 5 is a schematic structural diagram of a point cloud processing apparatus according to a fourth embodiment of the present invention. As shown in fig. 5, the point cloud processing apparatus includes:
the point cloud obtaining module 501 is configured to obtain a first point cloud and a second point cloud collected by a depth camera;
a point cloud registration module 502 for registering points in the first point cloud and the second point cloud;
a sub-region determining module 503, configured to determine, according to the three-dimensional coordinates of each point in the second point cloud, a sub-region to which each point belongs in a plurality of sub-regions of a preset plane, where the preset plane is a plane parallel to an imaging plane of the depth camera;
a candidate target point determining module 504, configured to determine, as a candidate target point, a point in each of the sub-regions that is closest to the depth camera;
a matching point determining module 505, configured to determine a matching point of the candidate target point from the first point cloud;
an obstacle point cloud determining module 506, configured to determine a candidate target point whose distance from the matching point is smaller than a preset threshold as a target point, so as to obtain an obstacle point cloud.
Optionally, the point cloud obtaining module 501 includes:
the system comprises an original point cloud obtaining unit, a depth camera and a control unit, wherein the original point cloud obtaining unit is used for obtaining a first original point cloud and a second original point cloud which are collected by the depth camera, and the first original point cloud and the second original point cloud are point clouds under a coordinate system of the depth camera;
and the point cloud conversion unit is used for converting the first original point cloud and the second original point cloud into point clouds under a world coordinate system and removing the point clouds on the ground to obtain the first point cloud and the second point cloud.
Optionally, each sub-region in the preset plane is provided with a horizontal angle range and a vertical angle range, and the sub-region determining module 503 includes:
the angle calculation unit is used for calculating the horizontal angle and the vertical angle of each point by adopting the three-dimensional coordinates of each point in the second point cloud;
the sub-area determining unit is used for determining, among the plurality of sub-areas of the preset plane, the sub-area whose horizontal angle range contains the horizontal angle and whose vertical angle range contains the vertical angle as the sub-area to which each point belongs;
the horizontal angle is an included angle between a first straight line and a second straight line, the vertical angle is an included angle between the first straight line and a third straight line, the first straight line is a connecting line between each point and the center of the depth camera, the second straight line is a connecting line between a projection point of each point on a vertical central line of the preset plane and the center of the depth camera, and the third straight line is a connecting line between a projection point of each point on a horizontal central line of the preset plane and the center of the depth camera.
Optionally, the angle calculation unit includes:
the horizontal angle calculation subunit is used for calculating an arc tangent function value of an absolute value of a ratio of a horizontal direction coordinate value and a depth value of each point in the second point cloud to obtain a horizontal angle of each point;
and the vertical angle calculating subunit is used for calculating an arc tangent function value of an absolute value of a ratio of a vertical direction coordinate value to a depth value of each point in the second point cloud to obtain the vertical angle of each point.
Optionally, the candidate target point determining module 504 includes:
a depth value obtaining unit, configured to obtain a depth value of a point included in the sub-region;
a candidate target point determining unit, configured to determine the point of the sub-region with the smallest depth value as the point closest to the depth camera in the sub-region, to obtain a candidate target point.
Optionally, the obstacle point cloud determination module 506 comprises:
the coordinate conversion unit is used for converting the three-dimensional coordinates of the candidate target point into coordinates of the first point cloud under the coordinate system by adopting a conversion matrix to obtain the three-dimensional coordinates of the candidate target point under the coordinate system of the first point cloud, wherein the conversion matrix is obtained by registering points in the first point cloud and the second point cloud;
the distance calculation unit is used for calculating the distance between the candidate target point and the matching point through the three-dimensional coordinates of the candidate target point under the coordinate system of the first point cloud and the three-dimensional coordinates of the matching point;
and the obstacle point cloud determining unit is used for determining the candidate target point with the distance to the matching point smaller than a preset threshold value as the target point to obtain the obstacle point cloud.
The point cloud processing device provided by the embodiment of the invention can execute the point cloud processing method provided by the first embodiment and the second embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
EXAMPLE five
Fig. 6 is a schematic structural diagram of an obstacle avoidance device for a robot according to a fifth embodiment of the present invention. As shown in fig. 6, the robot obstacle avoidance device includes:
the depth image acquisition module 601 is configured to acquire a depth image currently acquired by a depth camera;
a point cloud generating module 602, configured to generate a point cloud according to the depth image;
an obstacle point cloud determining module 603, configured to determine an obstacle point cloud from the point clouds;
the obstacle avoidance module 604 is used for controlling the robot to avoid obstacles according to the obstacle point cloud;
the obstacle point cloud is determined by the point cloud processing method described in the first embodiment or the second embodiment.
Optionally, the obstacle avoidance module 604 includes:
the projection unit is used for projecting the obstacle point cloud into a map to obtain the position of the obstacle in the map;
the obstacle avoidance path generating unit is used for generating an obstacle avoidance path according to the position of the obstacle in the map;
and the robot control unit is used for controlling the robot to move according to the obstacle avoidance path.
The robot obstacle avoidance device provided by the embodiment of the invention can execute the robot obstacle avoidance method provided by the third embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
EXAMPLE six
Fig. 7 shows a schematic structural diagram of a robot 70 that may be used to implement an embodiment of the invention. Robot 70 is intended to represent a device containing various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers.
As shown in fig. 7, the robot 70 includes at least one processor 71, and a memory communicatively connected to the at least one processor 71, such as a Read Only Memory (ROM) 72, a Random Access Memory (RAM) 73, and the like, wherein the memory stores a computer program executable by the at least one processor, and the processor 71 may perform various appropriate actions and processes according to the computer program stored in the Read Only Memory (ROM) 72 or the computer program loaded from the storage unit 78 into the Random Access Memory (RAM) 73. In the RAM 73, various programs and data necessary for the operation of the robot 70 can also be stored. The processor 71, the ROM 72, and the RAM 73 are connected to each other by a bus 74. An input/output (I/O) interface 75 is also connected to bus 74.
Various components in robot 70 are connected to I/O interface 75, including: an input unit 76 such as a keyboard, a mouse, a depth camera, and the like; an output unit 77 such as various types of displays, speakers, and the like; a storage unit 78, such as a magnetic disk, optical disk, or the like; and a communication unit 79 such as a network card, modem, wireless communication transceiver, etc. The communication unit 79 allows the robot 70 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
Processor 71 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 71 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The processor 71 performs the various methods and processes described above, such as point cloud processing methods, and/or robotic obstacle avoidance methods.
In some embodiments, the point cloud processing method, and/or the robot obstacle avoidance method, may be implemented as a computer program tangibly embodied in a computer-readable storage medium, such as the storage unit 78. In some embodiments, part or all of the computer program may be loaded and/or installed onto the robot 70 via the ROM 72 and/or the communication unit 79. When the computer program is loaded into RAM 73 and executed by processor 71, one or more steps of the point cloud processing method described above, and/or the robot obstacle avoidance method, may be performed. Alternatively, in other embodiments, the processor 71 may be configured to perform a point cloud processing method, and/or a robotic obstacle avoidance method, by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuit systems, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for implementing the methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, a special purpose computer, or another programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be performed. A computer program may execute entirely on a machine, partly on a machine, as a stand-alone software package partly on a machine and partly on a remote machine, or entirely on a remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described herein may be implemented on a robot 70, the robot 70 having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user may provide input to the robot 70. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
It should be understood that the various forms of flow shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present invention may be executed in parallel, sequentially, or in a different order, and no limitation is imposed herein as long as the desired results of the technical solution of the present invention can be achieved.
The above-described embodiments should not be construed as limiting the scope of the invention. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations, and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (12)

1. A point cloud processing method, comprising:
acquiring a first point cloud and a second point cloud acquired by a depth camera;
registering points in the first point cloud and the second point cloud;
determining, according to the three-dimensional coordinates of each point in the second point cloud, a sub-region to which each point belongs among a plurality of sub-regions of a preset plane, wherein the preset plane is a plane parallel to the imaging plane of the depth camera;
determining a point which is closest to the depth camera in each sub-region as a candidate target point;
determining matching points of the candidate target points from the first point cloud;
and determining the candidate target point with the distance to the matching point smaller than a preset threshold value as a target point to obtain the obstacle point cloud.
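Claim 1 requires registering the two point clouds but does not name an algorithm. As a minimal illustrative sketch only, the following assumes a point-to-point ICP registration via the Open3D library; the function name register_clouds and the correspondence distance are assumptions, not part of the patent.

```python
import numpy as np
import open3d as o3d

def register_clouds(first_xyz, second_xyz, max_corr_dist=0.05):
    """Estimate a 4x4 matrix mapping the second cloud into the first cloud's frame.
    Point-to-point ICP is an assumed choice; the claim only requires registration."""
    src = o3d.geometry.PointCloud()
    src.points = o3d.utility.Vector3dVector(second_xyz)   # second point cloud
    dst = o3d.geometry.PointCloud()
    dst.points = o3d.utility.Vector3dVector(first_xyz)    # first point cloud
    result = o3d.pipelines.registration.registration_icp(
        src, dst, max_corr_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation
```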
2. The point cloud processing method of claim 1, wherein the obtaining the first point cloud and the second point cloud collected by the depth camera comprises:
acquiring a first original point cloud and a second original point cloud acquired by a depth camera, wherein the first original point cloud and the second original point cloud are point clouds under a coordinate system of the depth camera;
and converting the first original point cloud and the second original point cloud into point clouds under a world coordinate system, and removing the point clouds on the ground to obtain the first point cloud and the second point cloud.
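Claim 2's conversion to the world coordinate system and removal of ground points could look roughly like the sketch below. It assumes a known 4x4 camera-to-world pose and a simple height cutoff for the ground; the cutoff value and function name are illustrative, since the patent does not fix the ground-removal criterion.

```python
import numpy as np

def to_world_and_strip_ground(points_cam, T_world_cam, ground_z=0.05):
    """points_cam: (N, 3) camera-frame points; T_world_cam: 4x4 pose matrix.
    ground_z is an assumed height cutoff, not specified in the patent."""
    homo = np.hstack([points_cam, np.ones((points_cam.shape[0], 1))])  # (N, 4)
    points_world = (T_world_cam @ homo.T).T[:, :3]                     # world frame
    return points_world[points_world[:, 2] > ground_z]                 # drop ground points
```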
3. The point cloud processing method of claim 1, wherein each sub-region of the preset plane is provided with a horizontal angle range and a vertical angle range, and the determining, according to the three-dimensional coordinates of each point in the second point cloud, the sub-region to which each point belongs among the plurality of sub-regions of the preset plane comprises:
calculating the horizontal angle and the vertical angle of each point by adopting the three-dimensional coordinates of each point in the second point cloud;
determining a sub-region of the preset plane whose horizontal angle range contains the horizontal angle and whose vertical angle range contains the vertical angle as the sub-region to which each point belongs;
the horizontal angle is an included angle between a first straight line and a second straight line, the vertical angle is an included angle between the first straight line and a third straight line, the first straight line is a connecting line of each point and the center of the depth camera, the second straight line is a connecting line of a projection point of each point on a vertical central line of the preset plane and the center of the depth camera, and the third straight line is a connecting line of a projection point of each point on a horizontal central line of the preset plane and the center of the depth camera.
4. The point cloud processing method of claim 3, wherein said calculating horizontal and vertical angles of each point using three-dimensional coordinates of each point in the second point cloud comprises:
calculating an arc tangent function value of the absolute value of the ratio of the horizontal coordinate value to the depth value of each point in the second point cloud to obtain the horizontal angle of each point;
and calculating an arc tangent function value of the absolute value of the ratio of the vertical coordinate value to the depth value of each point in the second point cloud to obtain the vertical angle of each point.
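Claims 3 and 4 bin each point by a horizontal angle atan(|x/z|) and a vertical angle atan(|y/z|), assigning it to the sub-region whose angle ranges contain those angles. A minimal sketch, assuming uniform 2-degree bins and leaving out the sign handling implied by the central-line construction of claim 3:

```python
import numpy as np

def assign_sub_regions(points, h_bin_deg=2.0, v_bin_deg=2.0):
    """points: (N, 3) array of (x, y, z), with z the depth value.
    Horizontal angle = atan(|x/z|), vertical angle = atan(|y/z|) per claim 4;
    the 2-degree bin widths are illustrative assumptions."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    h_angle = np.degrees(np.arctan(np.abs(x / z)))
    v_angle = np.degrees(np.arctan(np.abs(y / z)))
    h_idx = np.floor(h_angle / h_bin_deg).astype(int)
    v_idx = np.floor(v_angle / v_bin_deg).astype(int)
    return np.stack([h_idx, v_idx], axis=1)   # (N, 2) sub-region indices
```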
5. The point cloud processing method of claim 1, wherein said determining a point in each of the sub-regions that is closest to the depth camera as a candidate target point comprises:
acquiring depth values of points contained in the sub-region;
and determining the point with the minimum depth value in the sub-region as the point which is closest to the depth camera in the sub-region, to serve as a candidate target point.
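Claim 5 keeps, within each sub-region, the point with the smallest depth value as the candidate target point. A minimal sketch continuing from the binning above; keying a dictionary by sub-region index is an implementation choice, not from the patent.

```python
import numpy as np

def candidate_targets(points, bins):
    """points: (N, 3) second-cloud points; bins: (N, 2) sub-region indices.
    Keeps the point with the smallest depth (z) in each sub-region, per claim 5."""
    best = {}
    for p, b in zip(points, map(tuple, bins)):
        if b not in best or p[2] < best[b][2]:
            best[b] = p
    return np.array(list(best.values()))
```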
6. The point cloud processing method of any one of claims 1-5, wherein the determining the candidate target point with the distance to the matching point smaller than a preset threshold value as a target point to obtain the obstacle point cloud comprises:
converting, by using a conversion matrix, the three-dimensional coordinates of the candidate target point into three-dimensional coordinates under the coordinate system of the first point cloud, wherein the conversion matrix is obtained by registering the points in the first point cloud and the second point cloud;
calculating the distance between the candidate target point and the matching point according to the three-dimensional coordinates of the candidate target point in the coordinate system of the first point cloud and the three-dimensional coordinates of the matching point;
and determining the candidate target point with the distance to the matching point smaller than a preset threshold value as a target point to obtain the obstacle point cloud.
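Claim 6 maps each candidate target point into the first point cloud's coordinate system with the conversion matrix from registration, then keeps candidates whose distance to their matching point is below the threshold. The sketch below assumes the matrix is a 4x4 rigid transform and that matching points are nearest neighbours in the first cloud (the claims do not define the matching rule); names and parameters are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def obstacle_points(first_cloud, candidates, T_first_second, threshold):
    """first_cloud: (M, 3); candidates: (K, 3) in the second cloud's frame;
    T_first_second: assumed 4x4 rigid transform obtained from registration."""
    homo = np.hstack([candidates, np.ones((candidates.shape[0], 1))])
    cand_in_first = (T_first_second @ homo.T).T[:, :3]        # coordinate conversion
    dist, _ = cKDTree(first_cloud).query(cand_in_first, k=1)  # distance to matching point
    return cand_in_first[dist < threshold]                    # the obstacle point cloud
```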
7. A robot obstacle avoidance method is characterized by comprising the following steps:
acquiring a depth image currently acquired by a depth camera;
generating a point cloud according to the depth image;
determining an obstacle point cloud from the point cloud;
controlling a robot to avoid obstacles according to the obstacle point cloud;
wherein the obstacle point cloud is determined by the point cloud processing method of any one of claims 1-6.
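Claim 7 generates a point cloud from the current depth image, a step the patent does not spell out. A standard pinhole back-projection, with assumed intrinsics fx, fy, cx, cy, is sketched below for illustration.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """depth: (H, W) depth image in metres; fx, fy, cx, cy: assumed pinhole intrinsics.
    Returns (N, 3) camera-frame points, dropping pixels with no depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]
```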
8. The robot obstacle avoidance method of claim 7, wherein the controlling the robot to avoid obstacles according to the obstacle point cloud comprises:
projecting the obstacle point cloud into a map to obtain the position of the obstacle in the map;
generating an obstacle avoidance path according to the position of the obstacle in the map;
and controlling the robot to move according to the obstacle avoidance path.
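Claim 8 projects the obstacle point cloud into a map and generates an obstacle avoidance path around the occupied positions. The sketch below shows only the projection onto a 2D occupancy grid; the grid geometry parameters are assumptions, and path generation (for example, an A* planner) is omitted.

```python
import numpy as np

def mark_obstacles(obstacle_xyz, grid, origin_xy, resolution):
    """Project obstacle points onto a 2D occupancy grid (first step of claim 8).
    grid: (rows, cols) int array; origin_xy: world (x, y) of cell (0, 0);
    resolution: metres per cell. All parameters are illustrative assumptions."""
    ij = np.floor((obstacle_xyz[:, :2] - origin_xy) / resolution).astype(int)
    inside = (ij[:, 0] >= 0) & (ij[:, 0] < grid.shape[1]) & \
             (ij[:, 1] >= 0) & (ij[:, 1] < grid.shape[0])
    grid[ij[inside, 1], ij[inside, 0]] = 1    # mark occupied cells
    return grid
```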
9. A point cloud processing apparatus, comprising:
the point cloud acquisition module is used for acquiring a first point cloud and a second point cloud acquired by the depth camera;
a point cloud registration module for registering points in the first point cloud and the second point cloud;
a sub-region determining module, configured to determine, according to the three-dimensional coordinates of each point in the second point cloud, a sub-region to which each point belongs in a plurality of sub-regions of a preset plane, where the preset plane is a plane parallel to an imaging plane of the depth camera;
a candidate target point determining module, configured to determine, as a candidate target point, a point in each of the sub-regions that is closest to the depth camera;
a matching point determination module for determining matching points of the candidate target points from the first point cloud;
and the obstacle point cloud determining module is used for determining the candidate target point with the distance to the matching point smaller than a preset threshold value as the target point to obtain the obstacle point cloud.
10. A robot obstacle avoidance device, characterized by comprising:
the depth image acquisition module is used for acquiring a depth image currently acquired by the depth camera;
the point cloud generating module is used for generating a point cloud according to the depth image;
the obstacle point cloud determining module is used for determining an obstacle point cloud from the point clouds;
the obstacle avoidance module is used for controlling the robot to avoid obstacles according to the obstacle point cloud;
wherein the obstacle point cloud is determined by the point cloud processing method of any one of claims 1-6.
11. A robot, characterized in that the robot comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the point cloud processing method of any one of claims 1-6 and/or the robot obstacle avoidance method of any one of claims 7-8.
12. A computer-readable storage medium, characterized in that the computer-readable storage medium stores computer instructions which, when executed, cause a processor to implement the point cloud processing method of any one of claims 1-6 and/or the robot obstacle avoidance method of any one of claims 7-8.
CN202211177209.6A 2022-09-26 2022-09-26 Point cloud processing method, obstacle avoidance method, device, robot and storage medium Pending CN115346020A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211177209.6A CN115346020A (en) 2022-09-26 2022-09-26 Point cloud processing method, obstacle avoidance method, device, robot and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211177209.6A CN115346020A (en) 2022-09-26 2022-09-26 Point cloud processing method, obstacle avoidance method, device, robot and storage medium

Publications (1)

Publication Number Publication Date
CN115346020A true CN115346020A (en) 2022-11-15

Family

ID=83955476

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211177209.6A Pending CN115346020A (en) 2022-09-26 2022-09-26 Point cloud processing method, obstacle avoidance method, device, robot and storage medium

Country Status (1)

Country Link
CN (1) CN115346020A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116931583A (en) * 2023-09-19 2023-10-24 深圳市普渡科技有限公司 Method, device, equipment and storage medium for determining and avoiding moving object
CN116931583B (en) * 2023-09-19 2023-12-19 深圳市普渡科技有限公司 Method, device, equipment and storage medium for determining and avoiding moving object

Similar Documents

Publication Publication Date Title
CN112771573B (en) Depth estimation method and device based on speckle images and face recognition system
JP6830139B2 (en) 3D data generation method, 3D data generation device, computer equipment and computer readable storage medium
WO2021052283A1 (en) Method for processing three-dimensional point cloud data and computing device
WO2022227489A1 (en) Collision detection method and apparatus for objects, and device and storage medium
JP7228623B2 (en) Obstacle detection method, device, equipment, storage medium, and program
CN115719436A (en) Model training method, target detection method, device, equipment and storage medium
CN112509126A (en) Method, device, equipment and storage medium for detecting three-dimensional object
CN115578433A (en) Image processing method, image processing device, electronic equipment and storage medium
CN110530375B (en) Robot adaptive positioning method, positioning device, robot and storage medium
JP7351892B2 (en) Obstacle detection method, electronic equipment, roadside equipment, and cloud control platform
CN115346020A (en) Point cloud processing method, obstacle avoidance method, device, robot and storage medium
US11651533B2 (en) Method and apparatus for generating a floor plan
CN110542421A (en) Robot positioning method, positioning device, robot, and storage medium
CN114299242A (en) Method, device and equipment for processing images in high-precision map and storage medium
CN113538555B (en) Volume measurement method, system, equipment and storage medium based on rule box
CN110530376B (en) Robot positioning method, device, robot and storage medium
US20230169680A1 (en) Beijing baidu netcom science technology co., ltd.
CN114723894B (en) Three-dimensional coordinate acquisition method and device and electronic equipment
CN115147809B (en) Obstacle detection method, device, equipment and storage medium
CN115790621A (en) High-precision map updating method and device and electronic equipment
CN113554882A (en) Method, apparatus, device and storage medium for outputting information
CN112465692A (en) Image processing method, device, equipment and storage medium
EP4024348A2 (en) Method and device for determining boundary points of bottom surface of vehicle, roadside device and cloud control platform
CN114742897B (en) Method, device and equipment for processing camera installation information of roadside sensing system
CN113312979B (en) Image processing method and device, electronic equipment, road side equipment and cloud control platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination