CN113609914B - Obstacle recognition method and device and vehicle control system

Info

Publication number: CN113609914B
Authority: CN (China)
Prior art keywords: point, bounding box, obstacle, convex hull, point cloud
Legal status: Active (granted)
Application number: CN202110779716.6A
Other languages: Chinese (zh)
Other versions: CN113609914A (en)
Inventors: 郭旭东, 万国强, 吴劲松
Current assignee: Beijing Jingwei Hirain Tech Co Ltd
Original assignee: Beijing Jingwei Hirain Tech Co Ltd
Application CN202110779716.6A filed by Beijing Jingwei Hirain Tech Co Ltd
Publication of CN113609914A; application granted; publication of CN113609914B

Classifications

    • G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06F ELECTRIC DIGITAL DATA PROCESSING; G06F 18/00 Pattern recognition; G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation: G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/23 Clustering techniques
    • G06F 18/24 Classification techniques


Abstract

The application provides an obstacle recognition method, an obstacle recognition device and a vehicle control system. The method distinguishes the type of an obstacle point cloud cluster using a trained classifier and calculates an initial bounding box of the obstacle point cloud cluster. After the initial bounding box is obtained, its main direction is determined and corrected, and the bounding box with the corrected main direction is taken as the target bounding box. The course angle of the obstacle is then calculated according to the target bounding box and the classification result of the classifier, which solves the problems that the calculated bounding-box course angle jitters and differs from the true course angle of the obstacle by 90 degrees.

Description

Obstacle recognition method and device and vehicle control system
Technical Field
The invention relates to the technical field of intelligent driving, in particular to a method and a device for identifying obstacles and a vehicle control system.
Background
In existing methods by which an autonomous vehicle identifies obstacles, peripheral convex hull points are extracted from point cloud clusters, an oriented bounding box is calculated from the detected convex hull points, and the direction of the longest side of the oriented bounding box is selected as the obstacle course angle.
Calculating obstacles in a port scene from convex hull points in this traditional way has three main problems. First, most vehicles in a port scene are container trucks, whose trailers have irregular shapes; the convex hull points therefore change greatly between two adjacent frames, the oriented bounding box jumps, and the output obstacle position and size jitter to some extent, which affects the decision planning of the autonomous truck. Second, when the obstacle is far from the ego vehicle, the scanned obstacle point cloud is sparse, so the bounding box calculated from the convex hull points has low reliability. Third, when following another vehicle, the long-side course angle of the oriented bounding box obtained from the vehicle tail directly ahead differs from the real course angle of the obstacle vehicle by 90 degrees. A reliable course angle calculation method that can adapt to the port scene is therefore one of the technical problems urgently to be solved by those skilled in the art.
Disclosure of Invention
In view of this, the embodiments of the present invention provide an obstacle recognition method, an obstacle recognition apparatus and a vehicle control system, so as to achieve reliable calculation of the course angle in a port scene.
In order to achieve the above object, the embodiment of the present invention provides the following technical solutions:
A method of identifying an obstacle, comprising:
Acquiring point cloud data in a field of view of the point cloud acquisition equipment through the point cloud acquisition equipment;
clustering the point cloud data to obtain an obstacle point cloud cluster;
classifying the shape of the obstacle point cloud cluster by using a classifier;
calculating a convex hull point set of the obstacle point cloud cluster;
calculating a minimum circumscribed rectangle of the convex hull point set, and taking the minimum circumscribed rectangle as an initial bounding box of the convex hull point set;
Correcting the direction of the initial bounding box, and taking the corrected initial bounding box as a target bounding box;
and calculating the course angle of the obstacle according to the target bounding box and the classification result of the classifier.
Optionally, in the above method for identifying an obstacle, correcting the direction of the initial bounding box includes:
determining reference points of the convex hull point set;
Determining a set of direction vectors of the initial bounding box based on the reference point, wherein the set of direction vectors comprises a plurality of direction vectors;
calculating two-dimensional median points of the convex hull point set;
calculating a main vector of an initial bounding box by taking the two-dimensional median point as a reference point;
Calculating a rotation angle based on the main vector and the course angle of the initial bounding box;
and correcting the angle of the initial bounding box based on the rotation angle to obtain a target bounding box.
Optionally, in the above method for identifying an obstacle, the reference point includes: a maximum value point and a minimum value point, which are the two points with the largest distance between them in the convex hull point set, wherein the point of the two that is farther from the point cloud acquisition equipment is the maximum value point, and the point of the two that is closer to the point cloud acquisition equipment is the minimum value point.
Optionally, in the above obstacle identifying method, the reference points further include a corner point, and the corner point p_3 is obtained by calculation based on the formula p_3 = argmax_{p ∈ CH(p)} [ d(l_{p_l,p_h}, p) + λ·||foot_{p_l,p_h}(p) - p_e||_2 ];
wherein d(l_{p_l,p_h}, p) is the distance from any convex hull point p in the convex hull point set to the straight line connecting the maximum value point p_h and the minimum value point p_l, foot_{p_l,p_h}(p) is the perpendicular foot of the convex hull point p projected onto that straight line, λ is a preset coefficient, ||foot_{p_l,p_h}(p) - p_e||_2 = min(||foot_{p_l,p_h}(p) - p_h||_2, ||foot_{p_l,p_h}(p) - p_l||_2), CH(p) represents the convex hull point set, and p ∈ CH(p) represents any convex hull point in the convex hull point set.
Optionally, in the above obstacle identifying method, calculating a two-dimensional median point of the convex hull point set includes:
Calculating p_median based on the formula p_median = mean(ω·p);
Judging whether the value of i is greater than 1;
If the value of i is less than 1, calculating ω_k based on the formula ω_k = ||p_median - p_k||, k = 1, 2, 3, …, n;
Substituting the calculated ω_k into the formula p_median = mean(ω·p), updating p_median, controlling the value of i to increase by a preset step size, and continuing to perform the action of judging whether the value of i is greater than 1;
When the value of i is greater than 1, outputting p_median as the two-dimensional median point of the convex hull point set;
Wherein ω is the weight coefficient corresponding to each convex hull point, and p_k, k = 1, 2, 3, …, n, represents each convex hull point in the convex hull point set.
Optionally, in the above obstacle identifying method, determining the set of direction vectors of the initial bounding box based on the reference point includes:
Calculating the main vector l of the initial bounding box by selecting, from the direction vector set, the vector whose distance from the two-dimensional median point p_median has the minimum modulus.
Optionally, in the above obstacle identifying method, calculating the rotation angle based on the principal vector and the course angle of the initial bounding box includes:
Based on the formula And calculating a rotation angle delta alpha, wherein the beta is a course angle of the initial bounding box, wherein l y represents a projection length of a main vector l on a y axis in a coordinate system, l x represents a projection length of the main vector l on an x axis in the coordinate system, and the coordinate system is a coordinate system established by taking a centroid point of an obstacle point cloud cluster as an origin.
Optionally, in the above method for identifying an obstacle, calculating a heading angle of the obstacle according to a target bounding box and a classification result of the classifier includes:
When the classification result of the classifier indicates that the type of the point cloud cluster is L-shaped, the calculated course angle of the long side of the target bounding box is directly used as the course angle of the obstacle;
And when the classification result of the classifier indicates that the type of the point cloud cluster is I type, selecting the calculated short side direction of the target bounding box as the course angle of the obstacle.
An obstacle recognition device, comprising:
the acquisition unit is used for acquiring point cloud data in the field of view of the point cloud acquisition equipment through the point cloud acquisition equipment;
The classification unit is used for clustering the point cloud data to obtain obstacle point cloud clusters;
The classifier is used for classifying the shape of the obstacle point cloud cluster;
The course angle identification unit is used for calculating a convex hull point set of the obstacle point cloud cluster and a minimum circumscribed rectangle of the convex hull point set, and taking the minimum circumscribed rectangle as an initial bounding box of the convex hull point set; correcting the direction of the initial bounding box, and taking the corrected initial bounding box as a target bounding box; and calculating the course angle of the obstacle according to the target bounding box and the classification result of the classifier.
A vehicle control system to which the obstacle identifying apparatus according to any one of the above is applied.
Based on the above technical scheme, the method provided by the embodiment of the invention distinguishes the type of the obstacle point cloud cluster using a trained classifier and calculates an initial bounding box of the obstacle point cloud cluster. After the initial bounding box is obtained, its main direction is determined and corrected, and the bounding box with the corrected main direction is taken as the target bounding box. The course angle of the obstacle is then calculated according to the target bounding box and the classification result of the classifier, which solves the problems that the calculated bounding-box course angle jitters and differs from the true course angle of the obstacle by 90 degrees.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required for the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a flow chart of a method for identifying an obstacle according to an embodiment of the present application;
FIG. 2 is a schematic view of an acquired obstacle point cloud cluster;
FIG. 3 is a schematic diagram of the analysis process performed by the classifier on the obstacle point cloud cluster;
FIG. 4 is a schematic diagram of calculating a convex hull point set of the obstacle point cloud cluster;
FIG. 5 is a schematic diagram of the structure of the calculated initial bounding box;
FIG. 6 is a schematic diagram of the correction procedure performed on the direction of the initial bounding box;
FIG. 7 is a schematic diagram of the locations of maximum and minimum points in a convex hull point set;
FIG. 8 is a flow chart of the calculation of two-dimensional median points for a convex hull point set;
FIG. 9 is a schematic view of the projection of the principal vector l on the X/Y axis;
FIG. 10 is a schematic diagram of the structure of a target bounding box;
FIG. 11 is a schematic view of a direction of a heading angle when a point cloud cluster is L-shaped;
FIG. 12 is a schematic direction diagram of a heading angle when the type of the point cloud cluster is type I;
fig. 13 is a schematic structural diagram of an obstacle identifying apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, and not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
In order to solve the problem of recognizing container trucks in a port environment, the application designs a point cloud type analysis method and a more stable bounding box generation algorithm tailored to the characteristic shape and size of container trucks. Referring to fig. 1, the obstacle recognition method disclosed in the embodiment of the application comprises the following steps:
step S101: and acquiring point cloud data in the view field of the point cloud acquisition equipment through the point cloud acquisition equipment.
In this scheme, the point cloud acquisition equipment can be a laser radar installed on a vehicle, and the vehicle can be an autonomous engineering vehicle or a passenger vehicle. The vehicle acquires three-dimensional point cloud information of the current environment through laser radars installed on its head and on its two sides, obtaining point cloud data for the whole scene. For example, the vehicle may be a container truck or a trailer that transports containers in a port.
Step S102: and clustering the point cloud data to obtain an obstacle point cloud cluster.
For example, after the point cloud data acquired by the point cloud acquisition device is obtained, the point cloud data are clustered using a Euclidean clustering algorithm to obtain obstacle point cloud clusters corresponding to different obstacles. Fig. 2 shows a schematic diagram of an "L"-shaped obstacle point cloud cluster.
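For illustration only (not part of the patent text), the clustering step can be sketched in Python as follows. DBSCAN from scikit-learn is used here as a stand-in for the Euclidean clustering algorithm; the function name, distance threshold and minimum cluster size are illustrative assumptions.

```python
# Illustrative sketch: clustering a lidar point cloud into obstacle clusters.
# DBSCAN with a Euclidean distance threshold stands in for Euclidean clustering;
# parameter values are assumed, not taken from the patent.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_point_cloud(points, distance_threshold=0.5, min_points=10):
    """points: (N, 3) array of lidar points; returns a list of obstacle clusters."""
    labels = DBSCAN(eps=distance_threshold, min_samples=min_points).fit_predict(points)
    return [points[labels == k] for k in set(labels) if k != -1]  # label -1 = noise

if __name__ == "__main__":
    pts = np.random.rand(2000, 3) * 30.0  # placeholder point cloud
    print(len(cluster_point_cloud(pts)), "clusters found")
```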
Step S103: and classifying the shape of the obstacle point cloud cluster by using a classifier.
For example, in one example, the obstacle point cloud clusters are shape classified using a trained classifier. For example, the obstacle point cloud clusters are classified into L-class obstacle point cloud clusters and I-class obstacle point cloud clusters based on shape and size characteristics of the obstacle point cloud clusters.
In order to solve the problem that, when an obstacle is directly in front of the ego vehicle, the laser radar only scans the tail of that vehicle and the calculated long-side course angle of the bounding box differs from the course angle of the obstacle vehicle by 90 degrees, the obstacle point cloud clusters need to be classified into L-type or I-type. For example, the obstacle point cloud clusters are loaded into a classifier. As shown in fig. 3, the classifier calculates the shape and size characteristics of each clustered obstacle point cloud cluster: a coordinate system is established with the centroid point of each obstacle point cloud cluster as the origin, the extents Δx and Δy of the cluster along the x direction and the y direction of this coordinate system are calculated, and three mathematical eigenvalues λ_1, λ_2 and λ_3 of the cluster are obtained by principal component analysis of the three-dimensional point cloud. It should be noted that the three eigenvalues are obtained using principal component analysis (PCA, Principal Component Analysis). The basic principle of PCA is to first take the direction of largest variance in the point cloud cluster as an eigenvector and obtain the corresponding eigenvalue, and then to calculate in turn the two eigenvectors with the largest variance in the two directions orthogonal to it to obtain the other two eigenvalues. PCA is a point cloud data analysis method; based on the three eigenvalues, shape characteristics of the obstacle point cloud cluster, such as linearity, flatness, sphericity and curvature change, are further calculated. The selected point cloud shape feature combinations and their calculation methods are shown in Table 1.
TABLE 1 Point cloud shape feature combinations and computing methods therefor
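For illustration only, a Python sketch of eigenvalue-based shape features is given below. The exact feature definitions of Table 1 are not reproduced in the text above, so the formulas used here (linearity, flatness, sphericity, curvature change and the extents Δx, Δy) are commonly used eigenvalue descriptors and should be read as assumptions rather than the patented feature set.

```python
# Illustrative sketch: PCA eigenvalues and eigenvalue-based shape features of one
# obstacle point cloud cluster. Feature formulas are assumptions (Table 1 is not
# reproduced in the text).
import numpy as np

def shape_features(cluster):
    """cluster: (N, 3) array of points belonging to one obstacle."""
    centered = cluster - cluster.mean(axis=0)
    dx, dy = np.ptp(centered[:, 0]), np.ptp(centered[:, 1])  # extent along x and y
    cov = np.cov(centered.T)                                 # 3x3 covariance matrix
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]      # lambda1 >= lambda2 >= lambda3
    eps = 1e-9                                               # guard against division by zero
    return {
        "delta_x": dx,
        "delta_y": dy,
        "linearity": (l1 - l2) / (l1 + eps),
        "flatness": (l2 - l3) / (l1 + eps),
        "sphericity": l3 / (l1 + eps),
        "curvature_change": l3 / (l1 + l2 + l3 + eps),
    }
```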
The segmented obstacle point cloud clusters are input into a designed support vector machine (SVM) classifier, and the classifier classifies them according to the shape characteristics calculated in the previous step, for example into I-type or L-type.
In the disclosed embodiments, the classifier is trained on a point cloud cluster dataset. For example, 5000 obstacle point cloud clusters can be collected to train the SVM classifier and 3000 point cloud clusters used to test it, so as to obtain a classifier whose classification results meet a preset accuracy requirement. In the tests, classification based on the selected point cloud shape feature combination finally reached an accuracy of 96% for L-shaped point cloud clusters and 90% for I-shaped point cloud clusters.
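For illustration only, a minimal sketch of training and testing such an SVM classifier with scikit-learn is shown below. Only the 5000/3000 split follows the text; the feature matrices, labels and SVM hyperparameters are placeholders.

```python
# Illustrative sketch: training an SVM shape classifier on point cloud features.
# Feature data here are random placeholders; only the 5000/3000 split follows the text.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def train_shape_classifier(X_train, y_train, X_test, y_test):
    """X_*: (M, D) shape-feature matrices; y_*: labels such as 'L' or 'I'."""
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(X_train, y_train)
    return clf, accuracy_score(y_test, clf.predict(X_test))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_train, y_train = rng.random((5000, 6)), rng.choice(["L", "I"], 5000)
    X_test, y_test = rng.random((3000, 6)), rng.choice(["L", "I"], 3000)
    _, acc = train_shape_classifier(X_train, y_train, X_test, y_test)
    print("test accuracy:", acc)
```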
Step S104: and calculating a convex hull point set of the obstacle point cloud cluster.
For example, after the obstacle point cloud cluster is obtained, the convex hull points of the cluster are computed using the Graham scan algorithm from computational geometry, and the convex hull point set corresponding to the obstacle point cloud cluster is constructed, as shown in fig. 4.
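For illustration only, a Python sketch of the Graham scan on the x-y projection of a cluster is given below; the function names are illustrative and degenerate inputs are handled only minimally.

```python
# Illustrative sketch: Graham scan convex hull of the x-y projection of a cluster.
import math

def _cross(o, a, b):
    """Cross product of OA x OB; positive for a counter-clockwise turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def graham_scan(points):
    """points: iterable of (x, y) tuples; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) < 3:
        return pts
    pivot = min(pts, key=lambda p: (p[1], p[0]))          # lowest, then leftmost point
    others = sorted((p for p in pts if p != pivot),
                    key=lambda p: (math.atan2(p[1] - pivot[1], p[0] - pivot[0]),
                                   (p[0] - pivot[0]) ** 2 + (p[1] - pivot[1]) ** 2))
    hull = [pivot]
    for p in others:
        while len(hull) >= 2 and _cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()                                     # drop clockwise/collinear turns
        hull.append(p)
    return hull
```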
Step S105: and calculating the minimum circumscribed rectangle of the convex hull point set, and taking the minimum circumscribed rectangle as an initial bounding box of the convex hull point set.
The convex hull point set obtained in step S104 is shown in fig. 4. The circumscribed rectangle with the smallest area is computed for the convex hull point set and used as the initial bounding box. A calculation method from the related art may be adopted to compute this minimum-area circumscribed rectangle, so the calculation process is not described in detail here; the calculated initial bounding box is shown in fig. 5.
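For illustration only, one common way to compute the minimum-area circumscribed rectangle is sketched below: the hull is rotated onto each hull edge and the axis-aligned extent is measured. This rotating-edge approach is an assumption, not necessarily the related-art method referenced by the patent.

```python
# Illustrative sketch: minimum-area circumscribed rectangle of a 2D convex hull.
# For each hull edge the hull is rotated so the edge lies along the x axis, the
# axis-aligned extent is measured, and the smallest-area candidate is kept.
import numpy as np

def min_area_rect(hull):
    """hull: (K, 2) array of convex hull vertices in order.
    Returns (corners, theta): four rectangle corners and the supporting edge angle."""
    hull = np.asarray(hull, dtype=float)
    best = None
    for i in range(len(hull)):
        edge = hull[(i + 1) % len(hull)] - hull[i]
        theta = np.arctan2(edge[1], edge[0])
        c, s = np.cos(-theta), np.sin(-theta)
        proj = hull @ np.array([[c, -s], [s, c]]).T        # rotate hull by -theta
        mins, maxs = proj.min(axis=0), proj.max(axis=0)
        area = np.prod(maxs - mins)
        if best is None or area < best[0]:
            best = (area, theta, mins, maxs)
    _, theta, mins, maxs = best
    local = np.array([[mins[0], mins[1]], [maxs[0], mins[1]],
                      [maxs[0], maxs[1]], [mins[0], maxs[1]]])
    c, s = np.cos(theta), np.sin(theta)
    corners = local @ np.array([[c, -s], [s, c]]).T        # rotate corners back
    return corners, theta
```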
Step S106: correcting the direction of the initial bounding box, and taking the corrected initial bounding box as a target bounding box.
Referring to fig. 6, the present step may specifically include step S1061 to step S1066.
Step S1061: and determining the reference point of the convex hull point set.
For example, in one embodiment, before correcting the direction of the initial bounding box, reference points in the set of convex hull points need to be calculated. The reference points comprise maximum value points, minimum value points and corner points in the convex hull point set.
The maximum value point and the minimum value point are the two points with the largest distance between them in the convex hull point set; the point of the two that is farther from the point cloud acquisition device is the maximum value point, and the point that is closer is the minimum value point.
In the embodiment of the present disclosure, the maximum value point and the minimum value point of the convex hull point set may be calculated by the formula d(p_h, p_l) = max{ d(p_i, p_j) | p_i, p_j ∈ CH(p) }, where d is the Euclidean distance between two convex hull points in the coordinate system, p_i and p_j are any two convex hull points in the convex hull point set, and CH(p) is the convex hull point set. After the two convex hull points with the largest Euclidean distance are found according to this formula, the Euclidean distance from each of them to the point cloud acquisition device O in the coordinate system is compared; the point with the larger distance is the maximum value point p_h, and the point with the smaller distance is the minimum value point p_l. The obtained maximum value point and minimum value point are shown in fig. 7.
The corner point p_3 is selected from the convex hull point set based on the maximum value point and the minimum value point, and is calculated by the formula p_3 = argmax_{p ∈ CH(p)} [ d(l_{p_l,p_h}, p) + λ·||foot_{p_l,p_h}(p) - p_e||_2 ].
Here, d(l_{p_l,p_h}, p) is the distance from any convex hull point p in the convex hull point set to the straight line connecting the maximum value point p_h and the minimum value point p_l, and λ is a preset coefficient whose value is preset, for example 0.01. The term ||foot_{p_l,p_h}(p) - p_e||_2 = min(||foot_{p_l,p_h}(p) - p_l||_2, ||foot_{p_l,p_h}(p) - p_h||_2) is determined by the norms from the perpendicular foot point foot_{p_l,p_h}(p) to the two extreme points (the maximum value point p_h and the minimum value point p_l): the two norms are compared and the smaller one is taken. In the above formula, CH(p) represents the convex hull point set, and p ∈ CH(p) represents any convex hull point in the convex hull point set.
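For illustration only, the reference points (the maximum value point p_h, the minimum value point p_l and the corner point p_3) can be sketched as below. The corner-point score follows the formula as reconstructed above, so it should be read as an assumption; λ = 0.01 and the sensor origin are example values.

```python
# Illustrative sketch: reference points of the convex hull point set.
# The corner-point score (distance to the p_l-p_h line plus a lambda-weighted
# foot-point term) follows the reconstructed formula and is an assumption.
import numpy as np

def reference_points(hull, sensor_origin=(0.0, 0.0), lam=0.01):
    """hull: (K, 2) convex hull points. Returns (p_h, p_l, p_3)."""
    hull = np.asarray(hull, dtype=float)
    d = np.linalg.norm(hull[:, None, :] - hull[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(d), d.shape)          # farthest pair of hull points
    a, b, o = hull[i], hull[j], np.asarray(sensor_origin)
    p_h, p_l = (a, b) if np.linalg.norm(a - o) >= np.linalg.norm(b - o) else (b, a)
    axis = (p_h - p_l) / np.linalg.norm(p_h - p_l)          # unit vector along the p_l-p_h line
    best_score, p_3 = -np.inf, None
    for p in hull:
        foot = p_l + np.dot(p - p_l, axis) * axis           # perpendicular foot on the line
        dist = np.linalg.norm(p - foot)                     # d(l_{pl,ph}, p)
        foot_term = min(np.linalg.norm(foot - p_h), np.linalg.norm(foot - p_l))
        score = dist + lam * foot_term
        if score > best_score:
            best_score, p_3 = score, p
    return p_h, p_l, p_3
```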
Step S1062: a set of direction vectors of the initial bounding box is determined based on the reference point, wherein the set of direction vectors includes a plurality of direction vectors.
After the maximum value point p_h, the minimum value point p_l and the corner point p_3 are obtained through calculation, the direction vector set of the initial bounding box can be obtained. The direction vector set comprises the long-side direction vector of the initial bounding box, the short-side direction vector of the initial bounding box, and the diagonal direction vector of the initial bounding box.
Step S1063: and calculating a two-dimensional median point of the convex hull point set.
As can be seen from fig. 5, the heading angle (the direction corresponding to the angle β) of the initial bounding box cannot fit the direction of the obstacle point cloud cluster, and therefore, the main direction of the initial bounding box needs to be corrected, so that the heading angle corresponding to the bounding box after the main direction correction can fit the direction of the obstacle point cloud cluster.
For example, in some embodiments, the primary direction of the initial bounding box may be corrected by a two-dimensional median point in the convex hull points, and in calculating the two-dimensional median point, the calculation process may be as shown in fig. 8, including steps S501-S506.
Step S501: and calculating to obtain a candidate average value based on a preset formula. For example, the candidate average value p median is calculated based on the formula p median =mean (ω·p).
Step S502: it is determined whether the value of i is greater than 1. If it is smaller than 1, the process proceeds to step S503, and if it is larger than 1, the process proceeds to step S506.
Step S503: if the value of i is less than 1, ω k is calculated based on the formula ω k=||pmedian-pk |, k=1, 2,3 … … n.
It should be noted that, P k may refer to a convex hull point in the initial bounding box.
Step S504: substituting ω k calculated into the formula p median =mean (ω·p), and updating p median.
Step S505: the value of control i is increased by a preset step size, and step S502 is continuously executed: it is determined whether the value of i is greater than 1.
For example, the value of the preset step may be 1 or other, which is not limited in the embodiment of the disclosure.
Step S506: when the value of i is greater than 1, p median is output as the two-dimensional median point of the convex hull point set.
Here, ω is the weight coefficient corresponding to each convex hull point, p_k, k = 1, 2, 3, …, n, represents each convex hull point in the convex hull point set, and mean(ω·p) combines the x and y values of the convex hull points into a two-dimensional point. The two-dimensional median point obtained by the final calculation is shown in fig. 7.
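For illustration only, the iterative two-dimensional median computation can be sketched as below. mean(ω·p) is interpreted here as a normalized weighted mean, and the loop runs until the index i exceeds 1 with a step size of 1; both interpretations are assumptions about the procedure described above.

```python
# Illustrative sketch: iterative two-dimensional median of the convex hull points.
# mean(w * p) is interpreted as a normalized weighted mean; this interpretation,
# the step size and the small epsilon are assumptions.
import numpy as np

def two_dimensional_median(hull, step=1.0):
    """hull: (K, 2) convex hull points. Returns the two-dimensional median point."""
    hull = np.asarray(hull, dtype=float)
    w = np.ones(len(hull))                                  # initial weights
    p_median = (w[:, None] * hull).sum(axis=0) / w.sum()    # candidate average value
    i = 0.0
    while i <= 1.0:                                         # stop once i exceeds 1
        w = np.linalg.norm(p_median - hull, axis=1) + 1e-12 # w_k = ||p_median - p_k||
        p_median = (w[:, None] * hull).sum(axis=0) / w.sum()
        i += step                                           # increase i by the preset step
    return p_median
```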
Step S1064: and calculating a main vector of the initial bounding box by taking the two-dimensional median point as a reference point.
Specifically, the main vector l can be calculated by selecting, from the direction vector set, the vector whose distance from the two-dimensional median point p_median has the minimum modulus.
Step S1065: after the principal vector l is calculated, the angle Δα at which the principal vector l needs to be rotated needs to be calculated, specifically, the angle Δα can be calculated by the formulaTo calculate the angle Δα at which the bounding box main vector l needs to rotate, where β is the heading angle of the initial bounding box, as shown in fig. 9, l y is the length of the projection on the y-axis in the main vector l coordinate system, and l x is the length of the projection on the x-axis in the main vector l coordinate system.
Step S1066: correcting the angle of the main direction of the initial bounding box based on the delta alpha, namely, rotating the initial bounding delta alpha, taking the length of the bounding box as the distance between two points farthest from the edge, taking the width of the bounding box as the vertical distance between the points farthest from the edge, and taking the calculated bounding box as a target bounding box as shown in fig. 10.
Step S107: and calculating the course angle of the obstacle according to the corrected main direction and the classification result of the classifier.
For example, in some embodiments, the bounding box after the correction of the main direction is taken as a target bounding box, and when the classification result of the classifier indicates that the type of the point cloud cluster is L-shaped, as shown in fig. 11, the calculated heading angle of the long side of the target bounding box is directly taken as the heading angle of the obstacle.
As shown in fig. 12, when the classification result of the classifier indicates that the type of the point cloud cluster is I-type, the calculated short-side direction of the target bounding box is selected as the heading angle of the obstacle.
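For illustration only, the final selection of the obstacle course angle from the target bounding box can be sketched as below; the corner ordering convention and function name are assumptions.

```python
# Illustrative sketch: choosing the obstacle course angle from the target bounding
# box according to the classifier result (long side for L-type, short side for I-type).
import numpy as np

def obstacle_course_angle(corners, cluster_type):
    """corners: (4, 2) target box corners in order; cluster_type: 'L' or 'I'."""
    e1, e2 = corners[1] - corners[0], corners[2] - corners[1]
    long_edge, short_edge = (e1, e2) if np.linalg.norm(e1) >= np.linalg.norm(e2) else (e2, e1)
    edge = long_edge if cluster_type == "L" else short_edge
    return np.arctan2(edge[1], edge[0])                     # course angle in radians
```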
According to the above scheme, a trained classifier is used to distinguish the type of the obstacle point cloud cluster, and an initial bounding box of the cluster is calculated. After the initial bounding box is obtained, its main direction is determined and corrected, the bounding box with the corrected main direction is taken as the target bounding box, and the course angle of the obstacle is calculated according to the target bounding box and the classification result of the classifier. This solves the problems that the calculated bounding-box course angle jitters and differs from the true course angle of the obstacle by 90 degrees.
The obstacle identifying apparatus provided in the embodiment of the present application is described below; the apparatus described below and the obstacle identifying method described above may refer to each other, and for details please refer to the foregoing method embodiment.
Referring to fig. 13, the obstacle recognizing apparatus of the present disclosure includes:
The acquisition unit 100 corresponds to step S101 in the above method, where the acquisition unit 100 is configured to acquire, by using a point cloud acquisition device, point cloud data in a field of view of the point cloud acquisition device;
the classification unit 200, corresponding to step S102 in the above method, is configured to cluster the point cloud data to obtain an obstacle point cloud cluster;
a classifier 300, corresponding to step S103 in the above method, for classifying the shape of the obstacle point cloud cluster;
The course angle identifying unit 400 corresponds to steps S104 to S107 in the above method and is configured to calculate a convex hull point set of the obstacle point cloud cluster and a minimum bounding rectangle of the convex hull point set, and take the minimum bounding rectangle as an initial bounding box of the convex hull point set; correct the direction of the initial bounding box, and take the corrected initial bounding box as a target bounding box; and calculate the course angle of the obstacle according to the target bounding box and the classification result of the classifier.
For the specific implementation of the steps performed by the course angle identifying unit 400, please refer to the above method embodiment; details are not repeated here.
Corresponding to the device, the application also discloses a vehicle control system and a vehicle using the vehicle control system, wherein the obstacle identifying device according to any one of the embodiments of the application is applied to the vehicle control system.
For convenience of description, the above system is described as being functionally divided into various modules, respectively. Of course, the functions of each module may be implemented in the same piece or pieces of software and/or hardware when implementing the present application.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for a system or system embodiment, since it is substantially similar to a method embodiment, the description is relatively simple, with reference to the description of the method embodiment being made in part. The systems and system embodiments described above are merely illustrative, wherein the elements illustrated as separate elements may or may not be physically separate, and the elements shown as elements may or may not be physical elements, may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative elements and steps are described above generally in terms of functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software modules may be disposed in Random Access Memory (RAM), memory, read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
It is further noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. A method of identifying an obstacle, comprising:
Acquiring point cloud data in a field of view of the point cloud acquisition equipment through the point cloud acquisition equipment;
clustering the point cloud data to obtain an obstacle point cloud cluster;
classifying the shape of the obstacle point cloud cluster by using a classifier;
Calculating a convex hull point set of the obstacle point cloud cluster and a minimum circumscribed rectangle of the convex hull point set, and taking the minimum circumscribed rectangle as an initial bounding box of the convex hull point set;
Correcting the direction of the initial bounding box, and taking the corrected initial bounding box as a target bounding box;
Calculating the course angle of the obstacle according to the classification result of the target bounding box and the classifier;
correcting the direction of the initial bounding box, including:
determining reference points of the convex hull point set;
Determining a set of direction vectors of the initial bounding box based on the reference point, wherein the set of direction vectors comprises a plurality of direction vectors;
calculating two-dimensional median points of the convex hull point set;
calculating a main vector of an initial bounding box by taking the two-dimensional median point as a reference point;
Calculating a rotation angle based on the main vector and the course angle of the initial bounding box;
correcting the angle of the initial bounding box based on the rotation angle to obtain a target bounding box;
calculating the course angle of the obstacle according to the target bounding box and the classification result of the classifier, wherein the method comprises the following steps:
When the classification result of the classifier indicates that the type of the point cloud cluster is L-shaped, the calculated course angle of the long side of the target bounding box is directly used as the course angle of the obstacle;
And when the classification result of the classifier indicates that the type of the point cloud cluster is I type, selecting the calculated short side direction of the target bounding box as the course angle of the obstacle.
2. The obstacle identification method of claim 1, wherein the reference point comprises: a maximum value point and a minimum value point, which are the two points with the largest distance between them in the convex hull point set, wherein the point of the two that is farther from the point cloud acquisition equipment is the maximum value point, and the point of the two that is closer to the point cloud acquisition equipment is the minimum value point.
3. The obstacle recognition method according to claim 2, wherein the reference points further include a corner point, and the corner point p_3 is obtained by calculation based on the formula p_3 = argmax_{p ∈ CH(p)} [ d(l_{p_l,p_h}, p) + λ·||foot_{p_l,p_h}(p) - p_e||_2 ];
wherein d(l_{p_l,p_h}, p) is the distance from any convex hull point p in the convex hull point set to the straight line connecting the maximum value point p_h and the minimum value point p_l, foot_{p_l,p_h}(p) is the perpendicular foot of the convex hull point p projected onto that straight line, λ is a preset coefficient, ||foot_{p_l,p_h}(p) - p_e||_2 = min(||foot_{p_l,p_h}(p) - p_h||_2, ||foot_{p_l,p_h}(p) - p_l||_2), CH(p) represents the convex hull point set, and p ∈ CH(p) represents any convex hull point in the convex hull point set.
4. The obstacle recognition method of claim 3, wherein computing the two-dimensional median point of the set of convex hull points comprises:
Calculating p_median based on the formula p_median = mean(ω·p);
Judging whether the value of i is greater than 1;
If the value of i is less than 1, calculating ω_k based on the formula ω_k = ||p_median - p_k||, k = 1, 2, 3, …, n;
Substituting the calculated ω_k into the formula p_median = mean(ω·p), updating p_median, controlling the value of i to increase by a preset step size, and continuing to perform the action of judging whether the value of i is greater than 1;
When the value of i is greater than 1, outputting p_median as the two-dimensional median point of the convex hull point set;
Wherein ω is the weight coefficient corresponding to each convex hull point, and p_k, k = 1, 2, 3, …, n, represents each convex hull point in the convex hull point set.
5. The obstacle recognition method of claim 4, wherein determining the set of direction vectors of the initial bounding box based on the reference point comprises:
Calculating the main vector l of the initial bounding box by selecting, from the direction vector set, the vector whose distance from the two-dimensional median point p_median has the minimum modulus.
6. The obstacle recognition method according to claim 5, wherein calculating a rotation angle based on the principal vector and a heading angle of the initial bounding box includes:
Calculating the rotation angle Δα based on the formula Δα = arctan(l_y / l_x) - β, wherein β is the course angle of the initial bounding box, l_y represents the projection length of the main vector l on the y axis of a coordinate system, l_x represents the projection length of the main vector l on the x axis of the coordinate system, and the coordinate system is established with the centroid point of the obstacle point cloud cluster as the origin.
7. An obstacle recognition device, characterized by comprising:
the acquisition unit is used for acquiring point cloud data in the field of view of the point cloud acquisition equipment through the point cloud acquisition equipment;
The classification unit is used for clustering the point cloud data to obtain obstacle point cloud clusters;
The classifier is used for classifying the shape of the obstacle point cloud cluster;
The course angle identification unit is used for calculating a convex hull point set of the obstacle point cloud cluster and a minimum circumscribed rectangle of the convex hull point set, and taking the minimum circumscribed rectangle as an initial bounding box of the convex hull point set; correcting the direction of the initial bounding box, and taking the corrected initial bounding box as a target bounding box; calculating the course angle of the obstacle according to the classification result of the target bounding box and the classifier;
The course angle identification unit is specifically configured to:
determining reference points of the convex hull point set;
Determining a set of direction vectors of the initial bounding box based on the reference point, wherein the set of direction vectors comprises a plurality of direction vectors;
calculating two-dimensional median points of the convex hull point set;
calculating a main vector of an initial bounding box by taking the two-dimensional median point as a reference point;
Calculating a rotation angle based on the main vector and the course angle of the initial bounding box;
correcting the angle of the initial bounding box based on the rotation angle to obtain a target bounding box;
When the classification result of the classifier indicates that the type of the point cloud cluster is L-shaped, the calculated course angle of the long side of the target bounding box is directly used as the course angle of the obstacle;
And when the classification result of the classifier indicates that the type of the point cloud cluster is I type, selecting the calculated short side direction of the target bounding box as the course angle of the obstacle.
8. A vehicle control system, characterized by comprising the obstacle identifying apparatus according to claim 7.
CN202110779716.6A 2021-07-09 2021-07-09 Obstacle recognition method and device and vehicle control system Active CN113609914B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110779716.6A CN113609914B (en) 2021-07-09 2021-07-09 Obstacle recognition method and device and vehicle control system


Publications (2)

Publication Number Publication Date
CN113609914A CN113609914A (en) 2021-11-05
CN113609914B true CN113609914B (en) 2024-05-10

Family

ID=78304363

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110779716.6A Active CN113609914B (en) 2021-07-09 2021-07-09 Obstacle recognition method and device and vehicle control system

Country Status (1)

Country Link
CN (1) CN113609914B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115598614A (en) * 2022-11-28 2023-01-13 南京隼眼电子科技有限公司(Cn) Three-dimensional point cloud target detection method and device and storage medium
CN116772887B (en) * 2023-08-25 2023-11-14 北京斯年智驾科技有限公司 Vehicle course initialization method, system, device and readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111381249A (en) * 2020-03-30 2020-07-07 北京经纬恒润科技有限公司 Method and device for calculating course angle of obstacle
CN111578894A (en) * 2020-06-02 2020-08-25 北京经纬恒润科技有限公司 Method and device for determining heading angle of obstacle
WO2021026705A1 (en) * 2019-08-09 2021-02-18 华为技术有限公司 Matching relationship determination method, re-projection error calculation method and related apparatus
CN112622923A (en) * 2019-09-24 2021-04-09 北京百度网讯科技有限公司 Method and device for controlling a vehicle
WO2021134441A1 (en) * 2019-12-31 2021-07-08 深圳元戎启行科技有限公司 Automated driving-based vehicle speed control method and apparatus, and computer device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109901567B (en) * 2017-12-08 2020-06-23 百度在线网络技术(北京)有限公司 Method and apparatus for outputting obstacle information
CN110147706B (en) * 2018-10-24 2022-04-12 腾讯科技(深圳)有限公司 Obstacle recognition method and device, storage medium, and electronic device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021026705A1 (en) * 2019-08-09 2021-02-18 华为技术有限公司 Matching relationship determination method, re-projection error calculation method and related apparatus
CN112622923A (en) * 2019-09-24 2021-04-09 北京百度网讯科技有限公司 Method and device for controlling a vehicle
WO2021134441A1 (en) * 2019-12-31 2021-07-08 深圳元戎启行科技有限公司 Automated driving-based vehicle speed control method and apparatus, and computer device
CN111381249A (en) * 2020-03-30 2020-07-07 北京经纬恒润科技有限公司 Method and device for calculating course angle of obstacle
CN111578894A (en) * 2020-06-02 2020-08-25 北京经纬恒润科技有限公司 Method and device for determining heading angle of obstacle

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A laser point cloud clustering algorithm for road obstacle recognition; 张名芳, 刘新雨, 付锐, 蒋拯民, 李星星; Laser & Infrared (激光与红外), Issue 09; full text *
Vehicle target recognition algorithm based on vehicle-mounted 32-line lidar point clouds; 孔栋, 王晓原, 刘亚奇, 陈晨, 王方; Science Technology and Engineering (科学技术与工程), Issue 05; full text *
Dynamic object detection and tracking in mine environments based on sparse lidar data; 孙祥峻, 刘志刚; Industrial Control Computer (工业控制计算机), Issue 07; full text *

Also Published As

Publication number Publication date
CN113609914A (en) 2021-11-05

Similar Documents

Publication Publication Date Title
CN109188459B (en) Ramp small obstacle identification method based on multi-line laser radar
US10217005B2 (en) Method, apparatus and device for generating target detection information
CN113609914B (en) Obstacle recognition method and device and vehicle control system
US10515259B2 (en) Method and system for determining 3D object poses and landmark points using surface patches
US8254644B2 (en) Method, apparatus, and program for detecting facial characteristic points
CN111381249B (en) Method and device for calculating course angle of obstacle
Liu et al. Dynamic vehicle detection with sparse point clouds based on PE-CPD
CN105678241B (en) A kind of cascade two dimensional image face pose estimation
CN114612665A (en) Pose estimation and dynamic vehicle detection method based on normal vector histogram features
CN108985375B (en) Multi-feature fusion tracking method considering particle weight spatial distribution
Li et al. Dl-slam: Direct 2.5 d lidar slam for autonomous driving
Nielsen Robust lidar-based localization in underground mines
CN117629215A (en) Chassis charging pile-returning method based on single-line laser radar point cloud registration
CN112712062A (en) Monocular three-dimensional object detection method and device based on decoupling truncated object
CN116310317A (en) Irregular large target point cloud cutting method for fitting point cloud target bounding boxes
CN116385997A (en) Vehicle-mounted obstacle accurate sensing method, system and storage medium
CN113807442B (en) Target shape and course estimation method and system
Lee et al. Vehicle tracking iterative by Kalman-based constrained multiple-kernel and 3-D model-based localization
CN115147471A (en) Laser point cloud automatic registration method based on curvature density characteristics
US20230154133A1 (en) Key point correction system, key point correction method, and nontransitory computer readable medium
Coenen et al. Recovering the 3d pose and shape of vehicles from stereo images
Van Leeuwen Motion estimation and interpretation for in-car systems
CN110907928B (en) Target object position determining method and device, electronic equipment and storage medium
CN116823944A (en) Vehicle pose estimation method integrating road side sparse point cloud and road features
CN115147612B (en) Processing method for estimating vehicle size in real time based on accumulated point cloud

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant