CN115100229A - Obstacle identification method and device based on depth point cloud data and robot - Google Patents


Info

Publication number
CN115100229A
Authority
CN
China
Prior art keywords
ground, point cloud, obstacle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210680286.7A
Other languages
Chinese (zh)
Inventor
林伟加
柏林
刘彪
舒海燕
沈创芸
祝涛剑
王恒华
方映峰
Current Assignee
Guangzhou Gosuncn Robot Co Ltd
Original Assignee
Guangzhou Gosuncn Robot Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Gosuncn Robot Co Ltd filed Critical Guangzhou Gosuncn Robot Co Ltd
Priority to CN202210680286.7A
Publication of CN115100229A
Legal status: Pending

Classifications

    • G06T 7/136 — Image analysis; segmentation; edge detection involving thresholding
    • G06V 10/764 — Image or video recognition using pattern recognition or machine learning; classification, e.g. of video objects
    • G06V 20/10 — Scenes; terrestrial scenes
    • G06T 2207/10028 — Image acquisition modality: range image; depth image; 3D point clouds
    • G06T 2207/20112 — Image segmentation details
    • G06T 2207/20164 — Salient point detection; corner detection

Abstract

The invention provides a method for identifying obstacles based on depth point cloud data, which comprises the following steps: S1, acquiring depth point cloud data from a structured light camera; S2, converting the depth point cloud data into the robot coordinate system, and extracting raised obstacles and recessed obstacles according to the maximum threshold height of a raised obstacle and the minimum threshold height of a recessed obstacle; S3, acquiring initial ground points, computing from them the intercept of the ground equation and the upper and lower obstacle thresholds for point cloud segmentation, and determining the type of each point in the preliminary ground point cloud data from the dot product of the point with the ground normal vector; S4, taking the raised and recessed obstacles obtained in step S2 together with those obtained in step S3 as the raised and recessed obstacles of the whole point cloud. The method accurately identifies the obstacles in the point cloud while reducing the amount of computation.

Description

Obstacle identification method and device based on depth point cloud data and robot
Technical Field
The invention relates to the technical field of robots, in particular to a method and a device for identifying obstacles based on depth point cloud data and a robot.
Background
In the autonomous navigation process, the robot usually performs a path planning task from a current point to a target point, and at the moment, the robot needs to sense surrounding environment information and select a passable safe path.
A robot is usually equipped with sensors such as a lidar (robots rarely use lidars with more than 32 lines, since high-line-count lidars are expensive), an IMU, and a wheel-speed encoder. A mechanical lidar, however, has a coarse vertical angular resolution: below 128 lines the scan lines are sparse, so in most cases it is used for mapping or localization. Moreover, because of blind zones caused by the mounting angle, small objects receive few scan points or are missed entirely, which makes such a lidar ill-suited to detecting small, low obstacles. Yet in most of the robot's working scenarios, such as parks, factories and high-speed railway stations, low obstacles that the lidar cannot detect may appear on the patrol route; other sensors are then needed to perceive these obstacles so that the robot can avoid them.
At present, robots mainly use the following three kinds of sensors to detect obstacles:
1. Ultrasonic sensor: it measures the distance to an obstacle in front of it by ultrasonic ranging, usually at a single point. Limited by the characteristics of sound waves, besides being single-point, its ranging distance is short, and it has a divergence angle that widens with distance, so it cannot capture detailed environment data. In addition, covering the robot's blind zones requires an array of sensors, which introduces mutual interference between the ultrasonic beams and also increases the cost of the robot.
2. Solid-state or semi-solid-state lidar: compared with a mechanical lidar it has a smaller horizontal field of view (a mechanical lidar can reach 360 degrees) but a finer vertical angular resolution, so the point cloud it obtains is denser and captures more detail. Limited by laser power, however, it is easily affected by ambient light and performs poorly outdoors.
3. Structured light camera: whereas an ordinary RGB camera has strict requirements on scene texture, a structured light camera actively projects light carrying coded information, so matching and depth computation work even in weakly textured or textureless environments, making it suitable for most scenes, indoor and outdoor. In addition, its point cloud is at pixel-level density, i.e. a denser point cloud.
The background description provided herein is for the purpose of generally presenting the context of the disclosure. Unless otherwise indicated herein, the material described in this section is not prior art to the claims in this application and is not admitted to be prior art by inclusion in this section.
Disclosure of Invention
Aiming at the technical problems in the related art, the invention provides a method for identifying obstacles based on depth point cloud data, which comprises the following steps:
S1, acquiring depth point cloud data from the structured light camera;
S2, converting the depth point cloud data into the robot coordinate system, and extracting raised obstacles and recessed obstacles according to the maximum threshold height of a raised obstacle and the minimum threshold height of a recessed obstacle;
S3, acquiring initial ground points, computing from them the intercept d of the ground equation and the upper and lower obstacle thresholds for point cloud segmentation, and determining the type of each point in the preliminary ground point cloud data (ground point, raised obstacle or recessed obstacle) from the dot product of the point with the ground normal vector;
S4, taking the raised and recessed obstacles obtained in step S2 together with those obtained in step S3 as the raised and recessed obstacles of the whole point cloud.
Further, step S3 comprises:
S31, obtaining the initial ground points P_ground according to the vertical ground-height distance of the points in the preliminary ground point cloud data;
S32, acquiring the ground normal vector and the intercept d of the ground equation from the initial ground points;
S33, acquiring the upper and lower obstacle thresholds for point cloud segmentation from the intercept d of the ground equation;
S34, computing the dot product of each point in the preliminary ground point cloud data with the normal vector, and obtaining the type of each point (ground point, raised obstacle or recessed obstacle) from the dot product result;
S35, taking the ground points P_ground from step S34 as the initial ground points of step S31, and repeating steps S31-S34 until a preset number of iterations is reached.
Specifically, step S31 includes: sorting the points of the preliminary ground point cloud data by their vertical ground-height distance in ascending order, and selecting the first N points as the initial ground points P_ground, where N is a positive integer.
Specifically, the upper threshold of the obstacle is:
d_threshold_up = d + preset minimum obstacle height + point cloud measurement noise;
and the lower threshold of the obstacle is:
d_threshold_down = d - preset minimum obstacle height - point cloud measurement noise.
Specifically, the type of each point in the preliminary ground point cloud data is obtained from its dot product result d_n as follows:
d_n ≥ d_threshold_up: the point belongs to a raised obstacle P_convex;
d_n ≤ d_threshold_down: the point belongs to a recessed obstacle P_concave;
d_threshold_down < d_n < d_threshold_up: the point is a ground point P_ground.
In a second aspect, another embodiment of the present invention discloses an obstacle identification apparatus based on depth point cloud data, which comprises the following units:
a depth point cloud data acquisition unit, used for acquiring depth point cloud data from the structured light camera;
a preliminary obstacle classification unit, used for converting the depth point cloud data into the robot coordinate system and extracting raised obstacles and recessed obstacles according to the maximum threshold height of a raised obstacle and the minimum threshold height of a recessed obstacle;
a subdivided obstacle classification unit, used for acquiring initial ground points, computing from them the intercept d of the ground equation and the upper and lower obstacle thresholds for point cloud segmentation, and determining the type of each point in the preliminary ground point cloud data (ground point, raised obstacle or recessed obstacle) from the dot product of the point with the ground normal vector;
an obstacle classification unit, used for taking the raised and recessed obstacles obtained by the preliminary obstacle classification unit together with those obtained by the subdivided obstacle classification unit as the raised and recessed obstacles of the whole point cloud.
Specifically, the subdivided obstacle classification unit further comprises:
an initial ground point acquisition unit, used for obtaining the initial ground points P_ground according to the vertical ground-height distance of the points in the preliminary ground point cloud data;
a ground normal vector and intercept acquisition unit, used for acquiring the ground normal vector and the intercept d of the ground equation from the initial ground points;
an obstacle upper and lower threshold acquisition unit, used for acquiring the upper and lower obstacle thresholds for point cloud segmentation from the intercept d of the ground equation;
an obstacle type classification unit, used for computing the dot product of each point in the preliminary ground point cloud data with the normal vector and obtaining the type of each point (ground point, raised obstacle or recessed obstacle) from the dot product result;
an iteration unit, used for taking the ground points P_ground from the obstacle type classification unit as the initial ground points of the initial ground point acquisition unit, and repeatedly executing the initial ground point acquisition unit, the ground normal vector and intercept acquisition unit, the obstacle upper and lower threshold acquisition unit and the obstacle type classification unit until a preset number of iterations is reached.
Specifically, the initial ground point acquisition unit sorts the points of the preliminary ground point cloud data by their vertical ground-height distance in ascending order and selects the first N points as the initial ground points P_ground, where N is a positive integer.
Specifically, the upper threshold of the obstacle is:
d_threshold_up = d + preset minimum obstacle height + point cloud measurement noise;
and the lower threshold of the obstacle is:
d_threshold_down = d - preset minimum obstacle height - point cloud measurement noise.
Specifically, the type of each point in the preliminary ground point cloud data is obtained from its dot product result d_n as follows:
d_n ≥ d_threshold_up: the point belongs to a raised obstacle P_convex;
d_n ≤ d_threshold_down: the point belongs to a recessed obstacle P_concave;
d_threshold_down < d_n < d_threshold_up: the point is a ground point P_ground.
In a third aspect, another embodiment of the invention discloses a robot, comprising a processor, a memory, a structured light camera, and instructions stored in the memory; when executing the instructions, the processor implements the above obstacle identification method based on depth point cloud data.
In a fourth aspect, another embodiment of the invention discloses a non-volatile storage medium having instructions stored thereon, which when executed by a processor, are configured to implement a method for obstacle identification based on depth point cloud data.
Firstly, the point cloud is preliminarily divided into obvious obstacle points and points to be subdivided according to the maximum and minimum thresholds; then the points to be subdivided are classified as ground, recessed or raised points by iteratively fitting the normal vector and computing the point-to-plane distance; finally, the results are merged to obtain the classification of every point in the point cloud. The method accurately identifies the obstacles in the point cloud while reducing the amount of computation.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic view of a robot provided by an embodiment of the present invention;
fig. 2 is a schematic diagram of an obstacle identification method based on depth point cloud data according to an embodiment of the present invention;
fig. 3 is a schematic diagram of an initial extraction of a concave-convex obstacle according to an embodiment of the present invention;
fig. 4 is a schematic diagram of an obstacle recognition apparatus based on depth point cloud data according to an embodiment of the present invention;
fig. 5 is a schematic diagram of an obstacle identification device based on depth point cloud data according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived from the embodiments of the present invention by a person skilled in the art, are within the scope of the present invention.
Example one
Referring to fig. 1, fig. 1 is a schematic diagram of the robot of the present embodiment. A structured light camera 2 is mounted on the robot body 1, and depth point cloud data can be obtained by the structured light camera 2. In the figure, O_r and O_c denote the robot body coordinate system and the structured light camera coordinate system, respectively.
The robot comprises a central processing unit which is used for controlling and processing data of the robot.
The external parameters of the structured light camera mounted on the robot, including its installation height and tilt angle, can be measured when the robot leaves the factory and are built into the robot's application program or operating system. The relative pose between the robot coordinate system and the structured light camera coordinate system can therefore be determined from these external parameters, so data measured by the structured light camera can be transformed into the robot body coordinate system.
Referring to fig. 2, the present embodiment discloses an obstacle identification method based on depth point cloud data, which includes the following steps:
s1, acquiring depth point cloud data of the structured light camera;
specifically, the structured light camera of the embodiment acquires depth point cloud data of an environment, and transmits the acquired depth point cloud data to the central processing unit.
Further, in this embodiment, the original depth point cloud data is down-sampled to obtain down-sampled point cloud data.
Generally, the original depth point cloud output by the structured light camera is dense and its data volume is huge. To reduce the data volume, the original depth point cloud is first downsampled, which reduces the number of points and hence the amount of computation. Any common point cloud downsampling method may be used; this embodiment places no further restriction on it, as long as the original depth point cloud data is downsampled.
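Since the text leaves the downsampling method open, the following is a minimal voxel-grid downsampling sketch in NumPy; the function name and parameter values are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Keep one representative point (the centroid) per occupied voxel.

    points: (N, 3) array of x, y, z coordinates.
    voxel_size: edge length of the cubic voxel grid, same unit as points.
    """
    # Integer voxel index of every point.
    idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points sharing a voxel index and average them.
    _, inverse, counts = np.unique(idx, axis=0,
                                   return_inverse=True, return_counts=True)
    inverse = inverse.reshape(-1)          # normalize shape across numpy versions
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)       # accumulate per-voxel sums
    return sums / counts[:, None]          # per-voxel centroids

# A dense random patch collapses to at most one point per voxel.
dense = np.random.default_rng(0).uniform(0, 1, size=(1000, 3))
sparse = voxel_downsample(dense, voxel_size=0.25)
```

With a 0.25 m voxel over a unit cube there are at most 64 occupied voxels, so 1000 input points shrink to at most 64 representatives.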
S2, converting the depth point cloud data into the robot coordinate system, and extracting raised obstacles and recessed obstacles according to the maximum threshold height of a raised obstacle and the minimum threshold height of a recessed obstacle;
as shown in fig. 3, the point cloud under the camera coordinate system is first transferred to the robot coordinate system. Suppose that the transformation matrix of the camera coordinate system and the robot coordinate system is
Figure BDA0003698065640000081
Then one measurement point P in the camera coordinate system c Expressed in the robot coordinate system as
Figure BDA0003698065640000082
And (3) obtaining measurement data under the robot body coordinate system after conversion according to the formula (1), and then performing initial segmentation on the measurement points.
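Formula (1) can be sketched as follows; the extrinsic values used here (0.5 m mounting height, 30-degree downward tilt) are illustrative assumptions, not values from the patent:

```python
import numpy as np

def camera_to_robot(points_cam: np.ndarray, T_robot_cam: np.ndarray) -> np.ndarray:
    """Transform (N, 3) camera-frame points into the robot body frame.

    T_robot_cam: 4x4 homogeneous transform built from the camera's extrinsic
    mounting parameters (height, tilt), as measured at factory calibration.
    """
    homo = np.hstack([points_cam, np.ones((points_cam.shape[0], 1))])
    return (T_robot_cam @ homo.T).T[:, :3]

# Illustrative extrinsics: camera 0.5 m above the robot origin,
# tilted 30 degrees about the robot's y axis.
pitch = np.deg2rad(30.0)
R = np.array([[np.cos(pitch), 0.0, np.sin(pitch)],
              [0.0,           1.0, 0.0],
              [-np.sin(pitch), 0.0, np.cos(pitch)]])
T = np.eye(4)
T[:3, :3] = R
T[:3, 3] = [0.0, 0.0, 0.5]

pts_cam = np.array([[0.0, 0.0, 1.0]])   # one point 1 m along the camera z axis
pts_robot = camera_to_robot(pts_cam, T)
```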
As shown in fig. 3, a rough obstacle segmentation is first performed according to the highest and lowest obstacle heights the robot can traverse. Let the maximum threshold height of a raised obstacle be h_max, and the minimum threshold height of a recessed obstacle be h_min.

These thresholds are set according to the robot's obstacle-crossing capability: for example, indoors, low obstacles higher than about 3 cm may need to be considered, while outdoors 8 cm may be appropriate. Similarly, the maximum threshold height is set according to the robot's actual obstacle-crossing capability.

Specifically, points above the maximum threshold height h_max are classified as raised obstacles P_convex, and points below the minimum threshold height h_min are classified as recessed obstacles P_concave.
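The rough segmentation of step S2 amounts to two height comparisons in the robot frame. A sketch, with illustrative ±8 cm thresholds:

```python
import numpy as np

def rough_split(points: np.ndarray, h_max: float, h_min: float):
    """Initial height-only segmentation in the robot body frame.

    Points above h_max are obvious raised obstacles; points below h_min are
    obvious recessed obstacles; everything in between goes on to the
    plane-fitting refinement of step S3. Threshold values are illustrative.
    """
    z = points[:, 2]
    convex = points[z > h_max]                      # raised obstacles P_convex
    concave = points[z < h_min]                     # recessed obstacles P_concave
    remaining = points[(z >= h_min) & (z <= h_max)] # to be subdivided in S3
    return convex, concave, remaining

pts = np.array([[0.0, 0.0, 0.30],    # clearly above h_max -> raised obstacle
                [0.0, 0.0, -0.20],   # clearly below h_min -> recessed obstacle
                [0.0, 0.0, 0.01]])   # near ground -> needs refinement
convex, concave, remaining = rough_split(pts, h_max=0.08, h_min=-0.08)
```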
S3, acquiring initial ground points, computing from them the intercept d of the ground equation and the upper and lower obstacle thresholds for point cloud segmentation, and determining the type of each point in the preliminary ground point cloud data (ground point, raised obstacle or recessed obstacle) from the dot product of the point with the ground normal vector.
After the raised and recessed obstacles are obtained in step S2, their points are removed from the point cloud, and the remaining points form the preliminary ground point cloud data.
Specifically, step S3 includes:
step S31, obtaining an initial ground point P according to the vertical distance of the ground height of the points of the preliminary ground point cloud data ground
Sorting the points of the preliminary ground point cloud data according to the vertical distance of the ground height from small to large, and selecting the first N points as initial ground points P ground Wherein N is a positive integer.
Specifically, a preferred value of N in this embodiment is 100, 200.
S32, acquiring the ground normal vector and the intercept d of the ground equation from the initial ground points.
From the initial ground points P_ground, compute their mean [x_mean y_mean z_mean] and their covariance matrix. Performing SVD on the covariance matrix, the eigenvector corresponding to the smallest eigenvalue is the fitted ground normal vector [a b c]. The intercept d of the ground equation is then computed from the mean point and the normal vector: d = -[a b c] · [x_mean y_mean z_mean].
This yields the fitted ground equation ax + by + cz + d = 0.
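The plane fit of steps S31-S32 can be sketched with NumPy's SVD. Orienting the normal downward (c < 0) is an assumption made here so that the inequalities of step S34 apply as written; the patent does not specify an orientation:

```python
import numpy as np

def fit_ground_plane(ground_pts: np.ndarray):
    """Fit the ground plane ax + by + cz + d = 0 to candidate ground points.

    The normal [a, b, c] is the least-variance direction of the points: the
    right-singular vector of the centered point matrix for the smallest
    singular value (equivalently, the eigenvector of the covariance matrix
    with the smallest eigenvalue). The intercept is d = -[a, b, c] . mean,
    so the mean point satisfies the plane equation.
    """
    mean = ground_pts.mean(axis=0)
    _, _, vt = np.linalg.svd(ground_pts - mean, full_matrices=False)
    normal = vt[-1]            # row for the smallest singular value
    if normal[2] > 0:          # assumed convention: orient the normal downward
        normal = -normal
    d = -normal @ mean
    return normal, d

# A flat patch at z = 0.1 recovers normal [0, 0, -1] and intercept d = 0.1.
rng = np.random.default_rng(1)
patch = np.column_stack([rng.uniform(-1, 1, 200),
                         rng.uniform(-1, 1, 200),
                         np.full(200, 0.1)])
normal, d = fit_ground_plane(patch)
```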
and S33, acquiring an upper threshold and a lower threshold of the obstacle of point cloud segmentation according to the intercept d of the ground equation.
Measuring a noise error range by point cloud data of a sensor to obtain an upper limit threshold and a lower limit threshold of an obstacle during point cloud segmentation:
d threhold_up d + preset minimum height of the obstacle + point cloud data measurement noise;
the lower threshold of the obstacle is:
d threhold_down d-preset minimum height of obstacle-point cloud data measurement noise. When the error of the point cloud data is +/-2.5 cm,
d threhold_up =d+2.5cm
d threhold_down =-d-2.5cm
when the preset minimum height of the obstacle is 5 cm:
d threhold_up =d+5cm+2.5cm
d threhold_down =-d-5cm-2.5cm。
s34, performing dot product on each point in the preliminary ground point cloud data and the normal vector to obtain a dot product result, and obtaining the type of each point in the preliminary ground point cloud data according to the dot product result, wherein the type is a ground point, a convex obstacle or a concave obstacle;
after the normal vector of the ground equation is obtained in step S32, each point [ x ] is obtained n y n z n ]And plane normal vector [ x mean y mean z mean ]Dot product of, i.e. d n =-[a b c] T *[x n y n z n ](ii) a Comparing the dot product result with the upper and lower limit thresholds to obtain the specific type of the point, which is the ground point, the convex obstacle, and the concave obstacle in this embodiment.
d n ≥d threhold_up The point belonging to a raised obstacle P convex
d n ≤d threhold_down The point belonging to a recessed obstacle P concave
d n ≥d threhold_down &&d n ≤d threhold_up The point belongs to a ground point P ground
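The classification of step S34 can be sketched as follows, under the same downward-normal assumption noted above (so that raised points give larger d_n, matching the inequalities of the text) and with the 5 cm / 2.5 cm example values:

```python
import numpy as np

def classify_points(points: np.ndarray, normal: np.ndarray, d: float,
                    min_obstacle_h: float = 0.05, noise: float = 0.025):
    """Label every point against the fitted plane using the S34 inequalities.

    d_n = -[a, b, c] . p is compared with d_threshold_up / d_threshold_down
    derived from the plane intercept d, the preset minimum obstacle height,
    and the sensor's measurement-noise band (5 cm / 2.5 cm example values).
    Assumes the normal is oriented downward (c < 0).
    """
    d_up = d + min_obstacle_h + noise
    d_down = d - min_obstacle_h - noise
    d_n = -(points @ normal)
    return np.where(d_n >= d_up, 'convex',
                    np.where(d_n <= d_down, 'concave', 'ground'))

# Flat ground z = 0: downward normal [0, 0, -1], intercept d = 0.
normal = np.array([0.0, 0.0, -1.0])
labels = classify_points(np.array([[0.0, 0.0, 0.10],    # 10 cm step up
                                   [0.0, 0.0, -0.10],   # 10 cm pit
                                   [0.0, 0.0, 0.01]]),  # within the noise band
                         normal, d=0.0)
```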
S35, taking the ground points P_ground from step S34 as the initial ground points of step S31, and repeating steps S31-S34 until a preset number of iterations is reached.
That is, the ground points P_ground obtained in step S34 are fed back as the initial ground points of step S31, and the normal vector estimation and ground equation fitting are performed again; after the specified number of iterations, a more accurate segmentation is obtained.
And S4, taking the convex obstacles and the concave obstacles obtained in the step S2 and the convex obstacles and the concave obstacles obtained in the step S3 as the convex obstacles and the concave obstacles of the whole point cloud.
After step S3 is completed, the raised and recessed obstacles initially segmented in step S2 are merged with the ground points and the raised and recessed obstacles further subdivided in step S3, giving the segmentation result of the whole point cloud.
In this embodiment, firstly, the point cloud is preliminarily divided into obvious obstacle points and points to be subdivided according to the maximum and minimum thresholds; then the points to be subdivided are classified as ground, recessed or raised points by iteratively fitting the normal vector and computing the point-to-plane distance; finally, the results are merged to obtain the classification of every point in the point cloud. The method of the embodiment accurately identifies the obstacles in the point cloud while reducing the amount of computation.
Example two
Referring to fig. 4, the present embodiment discloses an obstacle recognition apparatus based on depth point cloud data, which includes the following units:
the depth point cloud data acquisition unit is used for acquiring depth point cloud data of the structured light camera;
specifically, the structured light camera of the present embodiment acquires depth point cloud data of an environment, and transmits the acquired depth point cloud data to the central processing unit.
Further, in this embodiment, the original depth point cloud data is down-sampled to obtain down-sampled point cloud data.
Generally, the original depth point cloud output by the structured light camera is dense and its data volume is huge. To reduce the data volume, the original depth point cloud is first downsampled, which reduces the number of points and hence the amount of computation. Any common point cloud downsampling method may be used; this embodiment places no further restriction on it, as long as the original depth point cloud data is downsampled.
The preliminary obstacle classification unit is used for converting the depth point cloud data into a robot coordinate system and respectively acquiring a raised obstacle and a recessed obstacle according to the maximum threshold height of the raised obstacle and the minimum threshold height of the recessed obstacle;
as shown in fig. 3, the point cloud under the camera coordinate system is first transferred to the robot coordinate system. Assuming rotation of camera coordinate system and robot coordinate systemChange the matrix into
Figure BDA0003698065640000121
Then one measurement point P in the camera coordinate system c Expressed in the robot coordinate system as
Figure BDA0003698065640000122
And (3) obtaining measurement data under the robot body coordinate system after conversion according to the formula (1), and then performing initial segmentation on the measurement points.
As shown in fig. 3, a rough segmentation of raised and recessed obstacles is first performed according to the highest and lowest obstacle heights the robot can traverse. Let the maximum threshold height of a raised obstacle be h_max, and the minimum threshold height of a recessed obstacle be h_min.

These thresholds are set according to the robot's obstacle-crossing capability: for example, indoors, low obstacles higher than about 3 cm may need to be considered, while outdoors 8 cm may be appropriate. Similarly, the maximum threshold height is set according to the robot's actual obstacle-crossing capability.

Specifically, points above the maximum threshold height h_max are classified as raised obstacles P_convex, and points below the minimum threshold height h_min are classified as recessed obstacles P_concave.
The subdivided obstacle classification unit is used for acquiring initial ground points, computing from them the intercept d of the ground equation and the upper and lower obstacle thresholds for point cloud segmentation, and determining the type of each point in the preliminary ground point cloud data (ground point, raised obstacle or recessed obstacle) from the dot product of the point with the ground normal vector.
After the preliminary obstacle classification unit obtains the raised and recessed obstacles, their points are removed from the point cloud, and the remaining points form the preliminary ground point cloud data.
Specifically, the subdivided obstacle classification unit further includes:
an initial ground point acquisition unit, for acquiring initial ground points P_ground according to the vertical distance of each point of the preliminary ground point cloud data from the ground;
The points of the preliminary ground point cloud data are sorted by this vertical distance from small to large, and the first N points are selected as initial ground points P_ground, where N is a positive integer.
Specifically, preferred values of N in this embodiment are 100 or 200.
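The seed selection can be sketched as below. This assumes NumPy, interprets "vertical distance of the ground height" as the absolute height of each point in the robot frame (z = 0 nominal ground), and uses a hypothetical function name.

```python
import numpy as np

def initial_ground_points(points, n=100):
    """Sort points of the preliminary ground point cloud by absolute
    vertical distance to the nominal ground (z = 0 in the robot frame)
    and keep the N closest points as the initial ground seed set."""
    order = np.argsort(np.abs(points[:, 2]))
    return points[order[:min(n, len(points))]]
```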
The ground normal vector and intercept acquisition unit is used for acquiring a ground normal vector and an intercept d of a ground equation according to the initial ground point;
According to the initial ground points P_ground, the mean [x_mean, y_mean, z_mean] and the covariance matrix of P_ground are obtained;
SVD decomposition is performed on the covariance matrix, and the eigenvector corresponding to the minimum eigenvalue is the fitted ground normal vector [a, b, c]. The intercept d of the ground equation is then calculated from the mean point and the normal vector: d = -(a·x_mean + b·y_mean + c·z_mean);
Thus the fitted ground equation ax + by + cz + d = 0 is obtained.
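The SVD-based plane fit can be sketched with NumPy as follows. This is a sketch, not the patent's code. Because the sign of an SVD eigenvector is arbitrary, the normal is additionally oriented here so that d_n = -(normal·p) increases with point height; this orientation step is an assumption made so the sketch matches the comparison rules the text uses later.

```python
import numpy as np

def fit_ground_plane(p_ground):
    """Fit a plane a*x + b*y + c*z + d = 0 to the seed ground points.
    The eigenvector of the covariance matrix's smallest eigenvalue,
    obtained via SVD, is the plane normal [a, b, c]."""
    mean = p_ground.mean(axis=0)                 # [x_mean, y_mean, z_mean]
    centered = p_ground - mean
    cov = centered.T @ centered / len(p_ground)  # 3x3 covariance matrix
    u, _, _ = np.linalg.svd(cov)
    normal = u[:, -1]                            # smallest-eigenvalue direction
    if normal[2] > 0:                            # orient so -(normal . p) grows with z
        normal = -normal
    d = -normal @ mean                           # intercept of the ground equation
    return normal, d
```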
and the barrier upper and lower limit threshold acquisition unit is used for acquiring the upper limit threshold and the lower limit threshold of the barrier segmented by the point cloud according to the intercept d of the ground equation.
The upper and lower obstacle thresholds for point cloud segmentation are obtained from the intercept d and the measurement noise range of the sensor's point cloud data:
d_threshold_up = d + preset minimum obstacle height + point cloud measurement noise;
the lower threshold of the obstacle is:
d_threshold_down = d - preset minimum obstacle height - point cloud measurement noise.
When the point cloud measurement error is ±2.5 cm, the noise term contributes:
d_threshold_up = d + 2.5 cm
d_threshold_down = d - 2.5 cm
When the preset minimum obstacle height is 5 cm:
d_threshold_up = d + 5 cm + 2.5 cm
d_threshold_down = d - 5 cm - 2.5 cm.
The obstacle type classification unit is used for taking the dot product of each point in the preliminary ground point cloud data with the normal vector and obtaining the type of each point from the result, the types being ground point, convex obstacle, and concave obstacle;
after the normal vector of the ground equation is obtained in step S32, each point [ x ] is obtained n y n z n ]And plane normal vector [ x mean y mean z mean ]Dot product of, i.e. d n =-[a b c] T *[x n y n z n ](ii) a The dot product result is compared with an upper limit threshold and a lower limit threshold, so as to obtain the specific type of the point, which is a ground point, a convex obstacle and a concave obstacle in this embodiment.
d_n ≥ d_threshold_up: the point belongs to a convex obstacle P_convex;
d_n ≤ d_threshold_down: the point belongs to a concave obstacle P_concave;
d_threshold_down < d_n < d_threshold_up: the point belongs to the ground points P_ground.
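The threshold computation and per-point classification can be sketched together as below. This is an illustrative sketch: the 5 cm minimum height and ±2.5 cm noise follow the embodiment's numeric example, the function name is hypothetical, and the normal is assumed oriented so that d_n = -(normal·p) increases with point height.

```python
import numpy as np

def classify_points(points, normal, d, min_height=0.05, noise=0.025):
    """Classify each point as ground / convex obstacle / concave obstacle
    by comparing d_n = -(normal . p) with the upper and lower thresholds."""
    d_up = d + min_height + noise      # d_threshold_up
    d_down = d - min_height - noise    # d_threshold_down
    d_n = -(points @ normal)           # dot product with the plane normal
    convex = points[d_n >= d_up]
    concave = points[d_n <= d_down]
    ground = points[(d_n > d_down) & (d_n < d_up)]
    return ground, convex, concave
```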
an iteration unit, for taking the ground points P_ground from the obstacle type classification unit as the initial ground points of the initial ground point acquisition unit, and repeatedly executing the initial ground point acquisition unit, the ground normal vector and intercept acquisition unit, the obstacle upper and lower threshold acquisition unit, and the obstacle type classification unit until a preset number of iterations is reached.
The iteration unit of this embodiment takes the ground points P_ground from the obstacle type classification unit as the initial ground points of the initial ground point acquisition unit and re-performs the ground normal vector estimation and ground equation fitting. After the specified number of iterations, a more accurate segmentation is obtained.
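Putting the units together, the iterative refinement loop could look like the following self-contained sketch. It is illustrative only: the seed count, iteration count, minimum height, and noise values are assumptions in the spirit of the embodiment, and the normal is oriented so that d_n grows with height (the sign of an SVD eigenvector is otherwise arbitrary).

```python
import numpy as np

def iterative_ground_segmentation(points, n_init=100, n_iter=3,
                                  min_height=0.05, noise=0.025):
    """Iteratively refine the ground fit: select seed points closest to
    z = 0, fit a plane by SVD of their covariance, reclassify all points,
    and reuse the resulting ground points as the next seed set."""
    ground = points[np.argsort(np.abs(points[:, 2]))[:n_init]]
    for _ in range(n_iter):
        mean = ground.mean(axis=0)
        centered = ground - mean
        u, _, _ = np.linalg.svd(centered.T @ centered / len(ground))
        normal = u[:, -1]              # smallest-eigenvalue direction
        if normal[2] > 0:              # orient so d_n grows with height
            normal = -normal
        d = -normal @ mean             # intercept of the ground equation
        d_n = -(points @ normal)
        d_up, d_down = d + min_height + noise, d - min_height - noise
        convex = points[d_n >= d_up]
        concave = points[d_n <= d_down]
        ground = points[(d_n > d_down) & (d_n < d_up)]
    return ground, convex, concave
```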
And the barrier classification unit is used for taking the convex barriers and the concave barriers obtained by the preliminary barrier classification unit and the convex barriers and the concave barriers obtained by the subdivided barrier classification unit as the convex barriers and the concave barriers of the whole point cloud.
After the preliminary obstacle classification unit completes, its results are combined with those of the subdivided obstacle classification unit: the convex and concave obstacles preliminarily segmented by the former, the ground points further subdivided by the latter, and the convex and concave obstacles found by the latter are collected to give the overall point cloud segmentation result.
In this embodiment, the point cloud is first preliminarily divided into obvious obstacle points and points to be subdivided according to the maximum and minimum thresholds; the points to be subdivided are then classified into ground, concave, or convex points by iteratively solving for the normal vector and computing point-to-plane distances; finally, the results are merged to obtain the classification of every point in the cloud. The method of this embodiment can accurately identify obstacles in the point cloud while reducing the amount of calculation.
EXAMPLE III
Referring to fig. 1, the present embodiment discloses a robot comprising a processor, a memory, a structured light camera, and instructions stored on the memory; when executing the instructions, the processor implements the obstacle identification method based on depth point cloud data of the first embodiment.
EXAMPLE IV
Referring to fig. 5, fig. 5 is a schematic structural diagram of an image enhancement apparatus of the present embodiment. The image enhancement device 20 of this embodiment comprises a processor 21, a memory 22 and a computer program stored in said memory 22 and executable on said processor 21. The processor 21 realizes the steps in the above-described method embodiments when executing the computer program. Alternatively, the processor 21 implements the functions of the modules/units in the above-described device embodiments when executing the computer program.
Illustratively, the computer program may be divided into one or more modules/units, which are stored in the memory 22 and executed by the processor 21 to accomplish the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program in the image enhancement device 20. For example, the computer program may be divided into the modules in the second embodiment, and for the specific functions of the modules, reference is made to the working process of the apparatus in the foregoing embodiment, which is not described herein again.
The image enhancement device 20 may include, but is not limited to, a processor 21 and a memory 22. It will be appreciated by those skilled in the art that the schematic diagram is merely an example and does not limit the image enhancement device 20, which may include more or fewer components than shown, combine some components, or use different components; for example, the image enhancement device 20 may also include input-output devices, network access devices, buses, etc.
The Processor 21 may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general purpose processor may be a microprocessor or any conventional processor. The processor 21 is the control center of the image enhancement device 20 and connects the various parts of the entire device through various interfaces and lines.
The memory 22 may be used for storing the computer programs and/or modules, and the processor 21 may implement various functions of the image enhancement device 20 by running or executing the computer programs and/or modules stored in the memory 22 and calling data stored in the memory 22. The memory 22 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and application programs required by at least one function (such as a sound playing function, an image playing function, etc.), and the data storage area may store data created according to use of the device (such as audio data, a phonebook, etc.). In addition, the memory 22 may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash memory card (Flash Card), at least one magnetic disk storage device, a flash memory device, or other non-volatile solid state storage device.
Wherein, the integrated module/unit of the image enhancement device 20 can be stored in a computer readable storage medium if it is implemented in the form of software functional unit and sold or used as a stand-alone product. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer readable storage medium, and when the computer program is executed by the processor 21, the steps of the method embodiments described above may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer-readable medium may contain suitable additions or subtractions depending on the requirements of legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer-readable media may not include electrical carrier signals or telecommunication signals in accordance with legislation and patent practice.
It should be noted that the above-described device embodiments are merely illustrative, where the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may also be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. In addition, in the drawings of the embodiment of the apparatus provided by the present invention, the connection relationship between the modules indicates that there is a communication connection between them, and may be specifically implemented as one or more communication buses or signal lines. One of ordinary skill in the art can understand and implement it without inventive effort.
The above description is only exemplary of the present invention and should not be taken as limiting the invention, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method for identifying obstacles based on depth point cloud data comprises the following steps:
s1, acquiring depth point cloud data of the structured light camera;
s2, converting the depth point cloud data into the robot coordinate system, and acquiring convex obstacles and concave obstacles respectively according to the maximum threshold height of convex obstacles and the minimum threshold height of concave obstacles;
s3, acquiring initial ground points, acquiring the intercept d of the ground equation and the upper and lower obstacle thresholds for point cloud segmentation according to the initial ground points, and acquiring the type of each point in the preliminary ground point cloud data according to the dot product of the preliminary ground point cloud data and the normal vector, the types being ground point, convex obstacle, and concave obstacle;
and S4, taking the convex obstacles and the concave obstacles obtained in the step S2 and the convex obstacles and the concave obstacles obtained in the step S3 as the convex obstacles and the concave obstacles of the whole point cloud.
2. The method of claim 1, the step S3 further comprising:
s31, obtaining initial ground points P_ground according to the vertical distance of the points of the preliminary ground point cloud data from the ground;
S32, acquiring a ground normal vector and an intercept d of a ground equation according to the initial ground point;
s33, acquiring an upper threshold and a lower threshold of the obstacle segmented by the point cloud according to the intercept d of the ground equation;
s34, performing dot product on each point in the preliminary ground point cloud data and the normal vector to obtain a dot product result, and obtaining the type of each point in the preliminary ground point cloud data according to the dot product result, wherein the type is a ground point, a convex obstacle or a concave obstacle;
s35, taking the ground points P_ground from step S34 as the initial ground points of step S31, and repeatedly performing steps S31-S34 until a preset number of iterations is reached.
3. The method according to claim 2, wherein step S31 comprises: sorting the points of the preliminary ground point cloud data according to the vertical distance of the ground height from small to large, and selecting the first N points as initial ground points P_ground, where N is a positive integer.
4. The method of claim 2, wherein the upper threshold of the obstacle is:
d_threshold_up = d + preset minimum obstacle height + point cloud measurement noise;
and the lower threshold of the obstacle is:
d_threshold_down = d - preset minimum obstacle height - point cloud measurement noise.
5. The method of claim 4, wherein the type of each point in the preliminary ground point cloud data is obtained from the dot product result d_n as follows:
d_n ≥ d_threshold_up: the point belongs to a convex obstacle P_convex;
d_n ≤ d_threshold_down: the point belongs to a concave obstacle P_concave;
d_threshold_down < d_n < d_threshold_up: the point belongs to the ground points P_ground.
6. An obstacle recognition device based on depth point cloud data comprises the following units:
the depth point cloud data acquisition unit is used for acquiring depth point cloud data of the structured light camera;
the preliminary obstacle classification unit is used for converting the depth point cloud data into the robot coordinate system and acquiring convex obstacles and concave obstacles respectively according to the maximum threshold height of convex obstacles and the minimum threshold height of concave obstacles;
the subdivided obstacle classification unit is used for acquiring initial ground points, acquiring the intercept d of the ground equation and the upper and lower obstacle thresholds for point cloud segmentation according to the initial ground points, and acquiring the type of each point in the preliminary ground point cloud data according to the dot product of the preliminary ground point cloud data and the normal vector, the types being ground point, convex obstacle, and concave obstacle;
and the barrier classification unit is used for taking the convex barriers and the concave barriers obtained by the preliminary barrier classification unit and the convex barriers and the concave barriers obtained by the subdivided barrier classification unit as the convex barriers and the concave barriers of the whole point cloud.
7. The apparatus of claim 6, the subdivided obstacle classification unit further comprising:
an initial ground point obtaining unit, for obtaining initial ground points P_ground according to the vertical distance of the points of the preliminary ground point cloud data from the ground;
The ground normal vector and intercept acquisition unit is used for acquiring a ground normal vector and an intercept d of a ground equation according to the initial ground point;
the obstacle upper and lower limit threshold acquisition unit is used for acquiring an upper limit threshold and a lower limit threshold of the obstacle segmented by the point cloud according to the intercept d of the ground equation;
the obstacle type classification unit is used for taking the dot product of each point in the preliminary ground point cloud data with the normal vector and obtaining the type of each point from the result, the types being ground point, convex obstacle, and concave obstacle;
an iteration unit, for taking the ground points P_ground from the obstacle type classification unit as the initial ground points of the initial ground point acquisition unit, and repeatedly executing the initial ground point acquisition unit, the ground normal vector and intercept acquisition unit, the obstacle upper and lower threshold acquisition unit, and the obstacle type classification unit until a preset number of iterations is reached.
8. The apparatus of claim 7, wherein the initial ground point acquisition unit specifically: sorts the points of the preliminary ground point cloud data according to the vertical distance of the ground height from small to large, and selects the first N points as initial ground points P_ground, where N is a positive integer.
9. The apparatus of claim 8, wherein the upper threshold of the obstacle is:
d_threshold_up = d + preset minimum obstacle height + point cloud measurement noise;
and the lower threshold of the obstacle is:
d_threshold_down = d - preset minimum obstacle height - point cloud measurement noise.
10. The apparatus of claim 9, wherein the type of each point in the preliminary ground point cloud data is obtained from the dot product result d_n as follows:
d_n ≥ d_threshold_up: the point belongs to a convex obstacle P_convex;
d_n ≤ d_threshold_down: the point belongs to a concave obstacle P_concave;
d_threshold_down < d_n < d_threshold_up: the point belongs to the ground points P_ground.
CN202210680286.7A 2022-06-16 2022-06-16 Obstacle identification method and device based on depth point cloud data and robot Pending CN115100229A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210680286.7A CN115100229A (en) 2022-06-16 2022-06-16 Obstacle identification method and device based on depth point cloud data and robot


Publications (1)

Publication Number Publication Date
CN115100229A true CN115100229A (en) 2022-09-23

Family

ID=83290970

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210680286.7A Pending CN115100229A (en) 2022-06-16 2022-06-16 Obstacle identification method and device based on depth point cloud data and robot

Country Status (1)

Country Link
CN (1) CN115100229A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination