CN116206286A - Obstacle detection method, device, equipment and medium under high-speed road condition

Info

Publication number: CN116206286A
Authority: CN (China)
Prior art keywords: point cloud data, obstacle, classifier, sub-point cloud
Legal status: Pending (an assumption, not a legal conclusion; Google has not performed a legal analysis)
Application number: CN202310434626.2A
Other languages: Chinese (zh)
Inventors: 余汪江, 赖晗, 李兴涛, 张建平, 邵书竹, 夏坤
Current Assignee: Navinfo Co Ltd (listed assignees may be inaccurate; Google has not performed a legal analysis)
Original Assignee: Navinfo Co Ltd
Application filed by Navinfo Co Ltd
Priority to CN202310434626.2A
Publication of CN116206286A

Classifications

    • G06V 20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06N 20/20: Ensemble learning
    • G06V 10/762: Image or video recognition using pattern recognition or machine learning, using clustering, e.g. of similar faces in social networks
    • G06V 10/764: Image or video recognition using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V 10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V 10/817: Fusion of data at the sensor, preprocessing, feature extraction or classification level, by voting
    • Y02T 10/40: Engine management systems

Abstract

The embodiments of this specification disclose a method, apparatus, device and medium for detecting obstacles under high-speed road conditions. The scheme may include: acquiring point cloud data collected by a sensor of a vehicle on an expressway; filtering the point cloud data to obtain obstacle point cloud data; clustering the obstacle point cloud data to obtain sub-point cloud data corresponding to a single obstacle; determining a feature vector of the sub-point cloud data; inputting the feature vector into a preset joint classifier, which comprises at least one classifier, to obtain the classification result it outputs; and determining the type of obstacle represented by the sub-point cloud data based on that classification result. This provides a low-cost, high-efficiency obstacle detection and classification method suitable for high-speed road conditions.

Description

Obstacle detection method, device, equipment and medium under high-speed road condition
Technical Field
The present disclosure relates to the field of electronic maps, and in particular, to a method, an apparatus, a device, and a computer readable medium for detecting an obstacle under a high-speed road condition.
Background
Under high-speed road conditions, the driving environment is relatively simple and vehicle speeds are high. Unlike obstacle recognition under ordinary road conditions, obstacle recognition under high-speed road conditions therefore places higher demands on efficiency.
Currently, obstacle detection based on laser point clouds is mainly divided into deep-learning-based detection schemes and traditional-method-based detection schemes. Deep-learning-based schemes are costly, and as the volume of point cloud data grows, model runtime becomes too long to meet the efficiency requirement. Current traditional-method-based schemes, in turn, cannot obtain classification information for the obstacle.
In view of this, it is desirable to provide a low-cost, high-efficiency obstacle detection and classification method suitable for high-speed road conditions.
Disclosure of Invention
The embodiments of the present disclosure provide a method, apparatus, device and computer readable medium for detecting an obstacle under a high-speed road condition, so as to provide a low-cost and high-efficiency method for detecting and classifying an obstacle applicable to a high-speed road condition.
In order to solve the above technical problems, the embodiments of the present specification are implemented as follows:
the method for detecting the obstacle under the high-speed road condition provided by the embodiment of the specification comprises the following steps:
Acquiring point cloud data acquired by a sensor of a vehicle on an expressway;
filtering the point cloud data to obtain obstacle point cloud data;
clustering the obstacle point cloud data to obtain sub point cloud data corresponding to a single obstacle;
determining a feature vector of the sub-point cloud data;
inputting the feature vector into a preset joint classifier to obtain a classification result output by the joint classifier; the joint classifier comprises at least one classifier;
and determining the type of the obstacle represented by the sub-point cloud data based on the classification result output by the joint classifier.
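The six steps above can be sketched end to end as a minimal pipeline. All function bodies here are simplified stand-ins for illustration (a flat-ground filter, a grid-bucketing "clusterer", bounding-box features, and callable classifiers); the patent does not fix these implementations:

```python
import numpy as np

def filter_ground(points, z_thresh=0.2):
    # Toy stand-in for the filtering step: keep points higher than
    # z_thresh above an assumed flat ground at z = 0.
    return points[points[:, 2] > z_thresh]

def cluster_obstacles(points, cell=1.0):
    # Toy stand-in for the clustering step: bucket points into 2-D grid
    # cells; each non-empty cell is treated as one obstacle's sub point cloud.
    keys = np.floor(points[:, :2] / cell).astype(int)
    _, labels = np.unique(keys, axis=0, return_inverse=True)
    return [points[labels == k] for k in range(labels.max() + 1)]

def feature_vector(sub_cloud):
    # Feature step: simple shape features - bounding-box extent plus point count.
    extent = sub_cloud.max(axis=0) - sub_cloud.min(axis=0)
    return np.append(extent, len(sub_cloud))

def joint_classify(feature, classifiers):
    # Joint-classifier step: each classifier returns the types it votes for;
    # the type with the most votes is the predicted obstacle type.
    votes = {}
    for clf in classifiers:
        for t in clf(feature):
            votes[t] = votes.get(t, 0) + 1
    return max(votes, key=votes.get)
```

For instance, `joint_classify(fv, [clf_a, clf_b, clf_c])` with three one-vs-rest classifiers reproduces the voting scheme described in the detailed description.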
The embodiment of the present disclosure provides an obstacle detection device under high-speed road conditions, including:
the point cloud data acquisition module is used for acquiring point cloud data acquired by a sensor of a vehicle on an expressway;
the point cloud data filtering module is used for filtering the point cloud data to obtain obstacle point cloud data;
the point cloud data clustering module is used for clustering the obstacle point cloud data to obtain sub point cloud data corresponding to a single obstacle;
the point cloud feature extraction module is used for determining feature vectors of the sub-point cloud data;
The classifier module is used for inputting the feature vector into a preset joint classifier to obtain a classification result output by the joint classifier; the joint classifier comprises at least one classifier;
and the obstacle type determining module is used for determining the obstacle type represented by the sub-point cloud data based on the classification result output by the joint classifier.
A computer device provided in an embodiment of the present disclosure includes a memory, a processor, and a computer program stored on the memory, where the processor executes the computer program to implement the following method:
acquiring point cloud data acquired by a sensor of a vehicle on an expressway;
filtering the point cloud data to obtain obstacle point cloud data;
clustering the obstacle point cloud data to obtain sub point cloud data corresponding to a single obstacle;
determining a feature vector of the sub-point cloud data;
inputting the feature vector into a preset joint classifier to obtain a classification result output by the joint classifier; the joint classifier comprises at least one classifier;
and determining the type of the obstacle represented by the sub-point cloud data based on the classification result output by the joint classifier.
A computer readable storage medium provided in an embodiment of the present specification, on which a computer program/instruction is stored, is characterized in that the computer program/instruction when executed by a processor implements the following method:
acquiring point cloud data acquired by a sensor of a vehicle on an expressway;
filtering the point cloud data to obtain obstacle point cloud data;
clustering the obstacle point cloud data to obtain sub point cloud data corresponding to a single obstacle;
determining a feature vector of the sub-point cloud data;
inputting the feature vector into a preset joint classifier to obtain a classification result output by the joint classifier; the joint classifier comprises at least one classifier;
and determining the type of the obstacle represented by the sub-point cloud data based on the classification result output by the joint classifier.
A computer program product provided in an embodiment of the present disclosure includes a computer program/instruction, wherein the computer program/instruction when executed by a processor implements the method of:
acquiring point cloud data acquired by a sensor of a vehicle on an expressway;
filtering the point cloud data to obtain obstacle point cloud data;
Clustering the obstacle point cloud data to obtain sub point cloud data corresponding to a single obstacle;
determining a feature vector of the sub-point cloud data;
inputting the feature vector into a preset joint classifier to obtain a classification result output by the joint classifier; the joint classifier comprises at least one classifier;
and determining the type of the obstacle represented by the sub-point cloud data based on the classification result output by the joint classifier.
One embodiment of the present disclosure can achieve at least the following advantages. After the point cloud data are filtered and segmented into sub-point cloud data corresponding to a single obstacle, a preset joint classifier comprising at least one classifier identifies the sub-point cloud data and determines the obstacle type. Using classifiers instead of a deep network keeps cost low and efficiency high. Because only the obstacle point cloud segmented from the raw data is classified, the classification focuses on the obstacle and ignores the background, so the class prediction is insensitive to changes in scene information, and obstacle recognition is accurate and robust.
Drawings
In order to more clearly illustrate the embodiments of the present description or the technical solutions in the prior art, the drawings required for the embodiments or the prior-art description are briefly introduced below. It is obvious that the drawings described below are only some embodiments of the present application, and that other drawings may be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a flow chart of an obstacle detection method under high-speed road conditions according to an embodiment of the present disclosure;
fig. 2 is a schematic flow chart of a scheme for filtering to obtain obstacle point cloud data in an actual application scenario provided in an embodiment of the present disclosure;
fig. 3 is a schematic flow chart of a building process of the AdaBoost classifier in an actual application scenario provided in the embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an obstacle detecting apparatus corresponding to fig. 1 under a high-speed road condition according to an embodiment of the present disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of one or more embodiments of the present specification more clear, the technical solutions of one or more embodiments of the present specification will be clearly and completely described below in connection with specific embodiments of the present specification and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present specification. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without undue burden, are intended to be within the scope of one or more embodiments herein.
It should be understood that although the terms first, second, third, etc. may be used in this application to describe various information, these information should not be limited to these terms. These terms are only used to distinguish one type of information from another.
The following describes in detail the technical solutions provided by the embodiments of the present specification with reference to the accompanying drawings.
Compared with ordinary road conditions, high-speed road conditions offer a relatively simple driving environment and a higher vehicle speed. Unlike obstacle recognition under ordinary road conditions, obstacle recognition under high-speed road conditions therefore places higher demands on efficiency.
Currently, obstacle detection based on laser point cloud is mainly divided into detection schemes based on deep learning and detection schemes based on traditional methods.
The data-driven, deep-learning-based point cloud detection scheme has several drawbacks. First, model runtime is long: even when the model is deployed with the TensorRT framework, runtime on an embedded system is about 60 milliseconds, and it grows further as the sensing range and the number of points increase, so higher efficiency requirements cannot be met. Second, model training requires massive data, which consumes a great deal of manpower and material resources. Furthermore, prediction accuracy drops when a scene differs greatly from the model's training data, or when the point cloud data source under test differs from the training data source.
In point cloud detection schemes based on traditional methods, extraction of the road's drivable area and filtering of the ground point cloud are unstable; improper handling can cause misdetection of ground points and emergency braking while the unmanned vehicle is driving, and traditional clustering and segmentation algorithms do not yield classification information for dynamic obstacles.
In view of this, the embodiments of this specification provide a low-cost, high-efficiency method for detecting and classifying dynamic obstacles. It keeps computational cost and hardware cost low and algorithm robustness high while holding detection and classification time within a short bound (for example, 34 milliseconds), meeting the efficiency requirement of high-speed detection. When predicting the obstacle category, a classifier replaces the deep detection network and the segmented obstacle is classified directly, so the classification process focuses on the obstacle and ignores background information; the category prediction is therefore insensitive to changes in scene information, which improves robustness.
Fig. 1 is a flow chart of an obstacle detection method under high-speed road conditions according to an embodiment of the present disclosure.
It is understood that the method may be performed by any apparatus, device, platform, or device cluster with computing and processing capabilities.
As shown in fig. 1, the process may include the steps of:
step 102: and acquiring point cloud data acquired by a sensor of a vehicle on an expressway.
Point cloud data refers to a set of points in a three-dimensional coordinate system. Each point carries three-dimensional spatial coordinates (x-axis, y-axis, and z-axis coordinates). Optionally, the point cloud data may further include reflection intensity information, and may also include color information, echo count information, and the like.
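As a concrete illustration (all values invented, not taken from the patent), such data is conveniently held as an N x 4 array:

```python
import numpy as np

# One row per laser return: x, y, z coordinates in metres plus a
# reflection intensity channel. The values below are illustrative only.
point_cloud = np.array([
    [12.4, -1.7, 0.05,  34.0],   # low z: likely a ground return
    [18.9,  0.3, 1.42, 180.0],   # ~1.4 m high, strong return: likely a vehicle
], dtype=np.float32)

xyz = point_cloud[:, :3]       # three-dimensional spatial coordinates
intensity = point_cloud[:, 3]  # reflection intensity
```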
The point cloud data may specifically be LiDAR (laser) point cloud data, acquired by a LiDAR sensor. In practice, a LiDAR unit may include a laser ranging system, an opto-mechanical scanning unit, a control and recording unit, a Global Positioning System (GPS) receiver, an Inertial Measurement Unit (IMU), an imaging device, and the like.
The obstacle detection scheme provided in the embodiments of the present disclosure may be applied to high-speed road conditions. The sensor for collecting point cloud data may be loaded on a vehicle traveling on an expressway. Therefore, the point cloud data collected by the sensor is the point cloud data under the high-speed road condition.
In practical application, the scheme shown in fig. 1 may be applied while the vehicle is running: the sensor on the vehicle collects point cloud data in real time, and obstacles are identified from the collected data. For example, the technical scheme shown in fig. 1 may be executed once per preset time period, e.g. every 100 ms.
In the embodiments of this specification, since the collection and subsequent processing of the point cloud data (filtering, clustering, model recognition, etc.) may be performed in real time, the point cloud data may equally be referred to as dynamic point cloud data, the obstacle as a dynamic obstacle, the obstacle point cloud data as dynamic obstacle point cloud data, and so on.
In embodiments of the present description, the vehicle may specifically include an unmanned vehicle. The result of the obstacle recognition may be used by an autopilot module of the unmanned vehicle to implement an autopilot function.
Step 104: and filtering the point cloud data to obtain obstacle point cloud data.
In practical application, the point cloud data collected by the vehicle's sensor can comprise obstacle point cloud data, corresponding to obstacles that affect driving, as well as non-obstacle point cloud data that does not affect driving. As examples, the non-obstacle point cloud data may include, without limitation, ground point cloud data and out-of-lane point cloud data.
In order to identify an obstacle more accurately, in the embodiment of the present specification, the object of obstacle detection may specifically be to identify an obstacle affecting driving of the vehicle based on obstacle point cloud data. For example, such obstacles affecting vehicle driving may include vehicles, bicycles, pedestrians, other obstacles (e.g., road blocks, railings), etc. located on the vehicle's travelable path.
Therefore, before the point cloud data are input to the classifier for identification, step 104 may filter the point cloud data: the non-obstacle point cloud data are filtered out, and the obstacle point cloud data are retained for subsequent computation.
Step 106: and clustering the obstacle point cloud data to obtain sub point cloud data corresponding to a single obstacle.
In practical application, a machine learning clustering method known for point cloud clustering may be adopted to obtain the sub-point cloud data corresponding to a single obstacle. Such methods include, for example, K-means clustering, DBSCAN clustering, OPTICS clustering, spectral clustering (SC), hierarchical clustering, mean-shift, BIRCH clustering, affinity propagation, and the like.
Optionally, the obstacle point cloud data may be segmented to obtain a point cloud data grid; then, the point cloud data grids of the neighborhood may be combined by, for example, a region growing method, thereby obtaining sub-point cloud data corresponding to a single obstacle. In the process of clustering the obstacle point cloud data, information such as the distance, azimuth angle and the like of each obstacle from the vehicle can be calculated, so that each obstacle can be distinguished conveniently.
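The grid-then-merge variant described above can be sketched as follows; the cell size and the 8-neighbourhood merge rule are assumptions, since the text names region growing but does not fix its parameters:

```python
import numpy as np
from collections import deque

def grid_region_grow(points, cell=0.5):
    """Bin obstacle points into 2-D grid cells, then merge 8-neighbouring
    occupied cells by flood fill; each merged region of cells is treated
    as the sub point cloud of one obstacle."""
    keys = np.floor(points[:, :2] / cell).astype(int)
    cells = {}
    for i, k in enumerate(map(tuple, keys)):
        cells.setdefault(k, []).append(i)   # cell -> point indices
    label, labels = 0, {}
    for seed in cells:
        if seed in labels:
            continue
        queue = deque([seed])
        labels[seed] = label
        while queue:                        # flood fill over occupied neighbours
            cx, cy = queue.popleft()
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nb = (cx + dx, cy + dy)
                    if nb in cells and nb not in labels:
                        labels[nb] = label
                        queue.append(nb)
        label += 1
    clusters = [[] for _ in range(label)]
    for k, idxs in cells.items():
        clusters[labels[k]].extend(idxs)
    return [points[np.array(ix)] for ix in clusters]
```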
In the embodiments of this specification, the sub-point cloud data corresponding to a single obstacle is obtained by filtering, clustering and segmenting the point cloud data, so that the subsequent classification focuses on the obstacle and ignores background information. The prediction of the obstacle class is therefore insensitive to changes in scene information, improving the robustness of obstacle detection and class identification.
Step 108: and determining the characteristic vector of the sub-point cloud data.
In the embodiment of the present specification, a classifier may be built in advance, and then, in the driving process of the vehicle, the point cloud data collected by the vehicle may be identified using the pre-built classifier to determine the obstacle.
Specifically, when the point cloud data is identified by using a preset classifier, a feature vector corresponding to the point cloud data needs to be determined, and then the feature vector is input into the preset classifier to identify the type of the obstacle.
Alternatively, the feature vector may be determined based on coordinate information in the point cloud data. In practical application, the coordinate information may reflect the shape information of the obstacle corresponding to the point cloud data.
Alternatively, the feature vector may be determined based on reflection intensity information in the point cloud data. In practical application, the reflection intensity information may reflect material information of the obstacle corresponding to the point cloud data.
Alternatively, the feature vector may also be determined based on other information in the point cloud data, for example, may be determined based on color information or the like.
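Combining the shape and intensity cues above, one possible feature vector looks like this. The exact feature set is an assumption for illustration; the text only says that features may be derived from coordinates (shape) and reflection intensity (material):

```python
import numpy as np

def sub_cloud_features(sub_cloud):
    """Build a hand-crafted feature vector for one obstacle's sub point
    cloud (columns: x, y, z, intensity)."""
    xyz, inten = sub_cloud[:, :3], sub_cloud[:, 3]
    extent = xyz.max(axis=0) - xyz.min(axis=0)   # bounding-box L/W/H (shape cue)
    mean_height = xyz[:, 2].mean()               # typical height of the object
    # 7 features: extent (3), mean height, mean/std intensity, point count
    return np.concatenate([extent, [mean_height, inten.mean(), inten.std(), len(xyz)]])
```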
Step 110: inputting the feature vector into a preset joint classifier to obtain a classification result output by the joint classifier; the joint classifier includes at least one classifier.
In embodiments of the present description, a preset joint classifier may be used to identify feature vectors. The joint classifier may include one classifier per obstacle type to be identified, each classifier identifying its corresponding type.
For example, if the obstacle types to be identified are class A, class B, and class C, the joint classifier may include three classifiers: classifier A, identifying whether an obstacle is a class A obstacle; classifier B, identifying whether it is a class B obstacle; and classifier C, identifying whether it is a class C obstacle.
When the method is applied specifically, the step of inputting the feature vector into a preset joint classifier specifically may include: the feature vector is input to the at least one classifier.
For example, the feature vector may be input into classifier A to obtain classifier A's classification result, indicating whether the sub-point cloud data corresponding to the feature vector represents a class A obstacle; into classifier B to obtain classifier B's result, indicating whether it represents a class B obstacle; and into classifier C to obtain classifier C's result, indicating whether it represents a class C obstacle.
Step 112: and determining the type of the obstacle represented by the sub-point cloud data based on the classification result output by the joint classifier.
In the embodiments of this specification, each binary classifier in the joint classifier distinguishes its own type from the other types. For example, binary classifier A distinguishes class A obstacles from other obstacles (i.e., class B and class C obstacles); binary classifier B distinguishes class B obstacles from other obstacles (i.e., class A and class C obstacles); and binary classifier C distinguishes class C obstacles from other obstacles (i.e., class A and class B obstacles).
In practical application, the number of votes for each preset obstacle type may be determined from the classification results output by the binary classifiers in the joint classifier; the preset obstacle type with the most votes is then determined to be the obstacle type corresponding to the sub-point cloud data.
Following the above example, the preset joint classifier includes binary classifiers A, B and C. Suppose that, for a certain feature vector input to the joint classifier, binary classifier A outputs no (voting for B and C), binary classifier B outputs yes (voting for B), and binary classifier C outputs no (voting for A and B). From the voting results of the binary classifiers, type B receives the most votes, so the obstacle represented by the sub-point cloud data corresponding to the feature vector is determined to be of type B.
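The voting rule in this example can be written out directly (the type names and the dict-based interface are illustrative choices, not from the patent):

```python
from collections import Counter

def vote(results, types=("A", "B", "C")):
    """One-vs-rest voting. `results` maps each obstacle type to its binary
    classifier's output: True means 'this is my type' and casts one vote
    for that type; False casts one vote for every other type. The type
    with the most votes wins."""
    votes = Counter()
    for t, is_type in results.items():
        if is_type:
            votes[t] += 1
        else:
            for other in types:
                if other != t:
                    votes[other] += 1
    return votes.most_common(1)[0][0]
```

With classifier A answering no, B yes, and C no, type B collects three votes, matching the worked example above.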
In the embodiment of the present specification, a classifier is used to identify feature vectors corresponding to point cloud data instead of a depth detection network. The method can ensure low calculation cost and low hardware cost, can effectively control the detection and classification time in a short time (for example, within 34 milliseconds), and can meet the detection requirement under high-speed road conditions.
It should be understood that, in the method described in one or more embodiments of the present disclosure, the order of some steps may be adjusted according to actual needs, or some steps may be omitted.
In the method of fig. 1, after the point cloud data are filtered and segmented into sub-point cloud data corresponding to a single obstacle, a preset joint classifier comprising at least one classifier identifies the sub-point cloud data and determines the obstacle type. Replacing a deep network with classifiers keeps cost low and efficiency high; and because only the obstacle point cloud segmented from the raw data is classified, classification focuses on the obstacle and ignores the background, so the class prediction is insensitive to changes in scene information, and obstacle recognition is accurate and robust.
Based on the method of fig. 1, the examples of the present specification also provide some specific implementations of the method, as described below.
In the embodiments of the present specification, a point cloud filtering scheme is proposed, which makes the filtering result more robust, and is described in detail below.
Hereinafter, "filtering coordinate points" denotes coordinate points in the point cloud data that need to be filtered out; in other words, points identified as not belonging to an obstacle (non-obstacle point cloud data). The filtering coordinate points may include first-type filtering coordinate points, second-type filtering coordinate points, and so on, as described below.
In one aspect, filtering the point cloud data in step 104 may specifically include filtering out first-type filtering coordinate points, which may include ground point cloud data and noise data close to the ground.
Specifically, filtering the point cloud data to obtain obstacle point cloud data may include: determining as first-type filtering coordinate points those coordinate points whose height difference from the first predicted ground height, given by the predicted ground plane, is smaller than a second preset threshold.
More specifically, low-lying point cloud data whose z-axis coordinates are smaller than a first preset threshold may be selected from the point cloud data; a predicted ground plane equation is fitted to these low-lying points; first-type filtering coordinate points are identified in the point cloud data according to the predicted ground plane equation; and the first-type filtering coordinate points are removed from the point cloud data to obtain the obstacle point cloud data. The first preset threshold may be set according to experience or experimental results, for example 2 meters.
The identifying the first type of filtering coordinate point in the point cloud data according to the predicted ground plane equation may specifically include: for a first target coordinate point in the point cloud data, calculating a first predicted ground height corresponding to the first target coordinate point according to the predicted ground plane equation and x-axis coordinates and y-axis coordinates of the first target coordinate point, wherein the first predicted ground height is used for representing the theoretical height of the ground corresponding to the first target coordinate point; subtracting the first predicted ground height from the z-axis coordinate value of the first target coordinate point to obtain a first height difference; judging whether the first height difference value is smaller than a second preset threshold value or not to obtain a first judgment result; and if the first judgment result shows that the first height difference value is smaller than a second preset threshold value, determining the first target coordinate point as a first type filtering coordinate point. The x axis, the y axis and the z axis can refer to coordinate axes in a Cartesian space coordinate system, the x axis and the y axis are positioned on a horizontal plane, and the z axis is in the plumb line direction. Wherein the second preset threshold may be set according to experience or experimental results, for example, may be set to 0.2 meter or the like.
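As an illustrative sketch (not the patented implementation itself), the first-type filtering described above can be expressed in a few lines of NumPy, assuming the predicted ground plane equation takes the form z = a·x + b·y + c; the function name and the plane parameterization are assumptions:

```python
import numpy as np

def filter_ground_points(points, plane, height_threshold=0.2):
    """Remove points whose height above the predicted ground plane is below the threshold.

    points: (N, 3) array of x, y, z coordinates.
    plane:  (a, b, c) coefficients of an assumed ground plane form z = a*x + b*y + c.
    """
    a, b, c = plane
    # First predicted ground height for each point, from its x and y coordinates.
    predicted_ground_z = a * points[:, 0] + b * points[:, 1] + c
    # First height difference: actual z minus predicted ground height.
    height_diff = points[:, 2] - predicted_ground_z
    # Points closer to the predicted ground than the second preset threshold
    # (0.2 m in the example above) are first-type filtering coordinate points.
    keep = height_diff >= height_threshold
    return points[keep]
```

A height threshold of 0.2 m follows the example value of the second preset threshold given in the text.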
In practical application, the first-type filtering coordinate points may be determined by taking the point cloud block as a unit. Specifically, the point cloud data may be divided into point cloud blocks of a second preset size, and a plane is fitted to each point cloud block using a random sample consensus algorithm, thereby improving the calculation efficiency. For example, the size of each point cloud block may be 40 m × 40 m.
In addition, optionally, on the basis of filtering the point cloud data with the multi-plane model, secondary filtering may be performed on the ground point cloud in order to further improve the accuracy of the filtering and to avoid problems, such as sudden braking during driving, caused by incompletely filtered point cloud data.
Specifically, the filtering the point cloud data to obtain obstacle point cloud data may further include: and determining the coordinate point with the height difference from the second predicted ground height smaller than a third preset threshold value as a first type of filtering coordinate point.
More specifically, the point cloud data may be divided into a plurality of point cloud data grids of a preset size; for a target grid in the plurality of point cloud data grids, determining a second predicted ground height corresponding to the target grid based on coordinate points in the target grid and coordinate points in adjacent grids of the target grid; identifying a first type of filtering coordinate point in the point cloud data according to the second prediction ground height; and filtering the first class of filtering coordinate points from the point cloud data to obtain obstacle point cloud data.
In determining the second predicted ground height corresponding to the target grid based on the coordinate points in the target grid and in its adjacent grids, for example, the z-axis coordinate of the lowest coordinate point in the target grid and its adjacent grids may be taken as the second predicted ground height corresponding to the target grid.
The identifying the first type of filtering coordinate point in the point cloud data according to the second predicted ground height specifically may include: for a second target coordinate point in the target grid, subtracting the second predicted ground height from the z-axis coordinate of the second target coordinate point to obtain a second height difference; judging whether the second height difference is smaller than a third preset threshold to obtain a second judgment result; and if the second judgment result shows that the second height difference is smaller than the third preset threshold, determining the second target coordinate point as a first-type filtering coordinate point. The third preset threshold may be set according to experience or experimental results, for example, to 0.1 meter.
In practical application, when the first type of filtering coordinate points are determined, a grid map of the point cloud can be established, so that the point cloud can be conveniently inquired, and the algorithm efficiency is improved. In the secondary filtering scheme, the second predicted ground height corresponding to the target grid is determined based on the adjacent grids, so that a data smoothing effect is achieved, filtering of grid data of the roof and the like which do not contain ground point clouds can be effectively avoided, and accuracy of ground point cloud data filtering is improved.
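The grid-based secondary filtering with neighborhood smoothing can be sketched as follows; the grid size and the function name are assumptions, and a dictionary over integer cell indices stands in for the grid map mentioned above:

```python
import numpy as np

def secondary_grid_filter(points, grid_size=1.0, height_threshold=0.1):
    """Secondary ground filtering on a 2-D grid (illustrative sketch).

    For each occupied grid cell, the second predicted ground height is the minimum
    z over the cell and its 8 neighbours (9-neighbourhood smoothing); points whose
    height above it is below the third preset threshold (0.1 m) are dropped.
    """
    # Map each point to integer grid indices in the x-y plane.
    ij = np.floor(points[:, :2] / grid_size).astype(int)
    min_z = {}
    for (i, j), z in zip(map(tuple, ij), points[:, 2]):
        min_z[(i, j)] = min(min_z.get((i, j), np.inf), z)

    keep = []
    for (i, j), z in zip(map(tuple, ij), points[:, 2]):
        # Second predicted ground height: minimum z over the 9-neighbourhood.
        ground = min(
            min_z.get((i + di, j + dj), np.inf)
            for di in (-1, 0, 1) for dj in (-1, 0, 1)
        )
        keep.append(z - ground >= height_threshold)
    return points[np.array(keep)]
```

Taking the minimum over the neighbourhood is the smoothing effect described above: a cell containing only, say, a vehicle roof borrows the ground height of an adjacent cell instead of mistaking the roof for the ground.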
In addition, considering that noise point clouds such as leaf points occur under high-speed road conditions, a threshold may be designed relative to the ground height to filter them out, so that clean dynamic obstacle point cloud data can be further extracted.
In an embodiment of the present disclosure, another aspect of filtering the point cloud data in step 104 may specifically include filtering a second type of filtering coordinate point, where the second type of filtering coordinate point may include out-road point cloud data.
Specifically, the filtering the point cloud data to obtain obstacle point cloud data may include: acquiring a lane center point from high-precision map data; determining coordinate points, in the point cloud data, with the distance from the center point of the lane being greater than a fourth preset threshold value as second class filtering coordinate points, wherein the second class filtering coordinate points comprise coordinate points located outside the expressway; and filtering the second class of filtering coordinate points from the point cloud data to obtain obstacle point cloud data.
In the above embodiment, the fourth preset threshold may be determined according to the road width, and may specifically be half the road width or the like. In an automatic driving scenario, since an unmanned vehicle typically continues to travel in a predetermined lane, the road width may in practice be the lane width. In practical application, the lane center points of the high-precision map may be traversed, and the grids whose distance from a center point is smaller than the road width are reserved, yielding the grid data within the road.
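The second-type filtering against the high-precision map could look like the sketch below; a brute-force nearest-centre search is used for clarity (a real system would index the map), and the function name and arguments are assumptions:

```python
import numpy as np

def filter_outside_road(points, lane_centers, half_road_width):
    """Keep only points within half the road width of the nearest lane centre point.

    points:       (N, 3) point cloud.
    lane_centers: (M, 2) x-y lane centre points taken from the high-precision map.
    """
    # Distance in the x-y plane from every point to every lane centre point.
    diffs = points[:, None, :2] - lane_centers[None, :, :]
    dists = np.linalg.norm(diffs, axis=2).min(axis=1)
    # Points farther than the fourth preset threshold are second-type
    # filtering coordinate points (outside the expressway) and are removed.
    return points[dists <= half_road_width]
```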
According to the above embodiment, the process of filtering the point cloud data to obtain the obstacle point cloud data may specifically include: and filtering out the first class filtering coordinate points and the second class filtering coordinate points from the point cloud data, thereby obtaining obstacle point cloud data.
In the above embodiment of the present disclosure, a point cloud filtering scheme fused with a high-precision map, a multi-plane fitting model, and a grid filtering algorithm is provided, so that the filtering result is more robust, and further, the obstacle recognition result is more robust.
According to the above description, in the actual application scenario provided in the embodiment of the present disclosure, the flow chart of the scheme for filtering to obtain the obstacle point cloud data is shown in fig. 2.
As shown in fig. 2, filtering the point cloud on the ground and outside the lane, extracting dynamic obstacle point cloud data may include the following steps 201 to 205.
Step 201, a grid map of the point cloud is established. In practical application, the grid map is built, so that the query of the point cloud can be facilitated, and the algorithm efficiency is improved.
Step 202, dividing the point cloud into a plurality of 40 m × 40 m point cloud blocks, and fitting a plane to each point cloud block using a random sample consensus algorithm to obtain a ground height estimate for each point cloud block. The ground point cloud data are filtered for the first time through the estimates of the multi-plane model, and the noise data hidden beneath the ground are filtered out. Step 202 performs the primary filtering of the ground point cloud.
And 203, traversing the lane center point of the high-precision map, and reserving a grid with a distance smaller than the width of the lane from the lane center point to obtain grid data in the road.
Step 204, on the basis of step 202, in order to avoid sudden braking during automatic driving caused by scenes in which the point cloud data cannot be completely filtered, the minimum z coordinate over the 9-neighborhood grids of each grid may be taken as the ground height of the current grid for secondary filtering. Through 9-neighborhood smoothing, grid data that do not contain ground point clouds, such as a vehicle roof, can be effectively prevented from being filtered out, yielding the point cloud data of dynamic obstacles.
And 205, filtering noise point clouds such as leaf points according to the ground height, and extracting clean dynamic obstacle point cloud data.
It will be appreciated that step 201 may be performed at any time prior to step 203 and step 204, e.g., may be performed after step 202. The order of execution of steps 203 and 204 may be interchanged.
In embodiments of the present description, a classifier may be utilized to identify and classify the obstacle point cloud. In practical application, the design of the feature vector input to the classifier is the key of the design of the classification algorithm, and the design of the good feature vector is the basis for realizing effective classification.
In one aspect, in the embodiments of the present specification, the obstacle may be identified based on the appearance characteristics of the obstacle, or, in other words, based on the shape information carried in the point cloud data. In practical application, the shape information carried in the point cloud data may be determined from the coordinate information of the point cloud. Optionally, since an obstacle can rotate freely in the x-y plane, the angle between the normal vector of the obstacle point cloud and the z axis may be considered, with the vector components in the x and y directions ignored, to reflect the appearance characteristics of the obstacle.
Specifically, the determining the feature vector of the sub-point cloud data may specifically include: obtaining normal vectors of all coordinate points in the sub-point cloud data, wherein the normal vectors of all coordinate points in the sub-point cloud data are determined based on coordinate points in a preset range around the coordinate points; and then, determining first characteristic information corresponding to the sub-point cloud data according to the included angle value of the normal vector and the vertical direction of each coordinate point. The predetermined range may be a distance value, for example, a virtual plane may be determined based on all coordinate points within a predetermined radius around the target coordinate point as a center, and then a normal vector of the virtual plane may be determined as a normal vector of the target coordinate point.
Optionally, determining the first feature information corresponding to the sub-point cloud data according to the angle value between the normal vector and the vertical direction of each coordinate point may specifically include: dividing the sub-point cloud data into a first preset number of point cloud data fragments in the vertical direction; determining a fragmentation included angle value of each point cloud data fragment according to the included angle value of the normal vector and the vertical direction of each coordinate point in each point cloud data fragment; and determining first sub-characteristic information of the sub-point cloud data according to the fragment included angle value of the fragment of the point cloud data.
In practical application, for example, the obstacle point cloud may be equally divided along the Z-axis direction into a plurality of parts (for example, 8 parts), and the inner product of the average normal vector of each part of the point cloud with the (0, 0, 1) unit normal vector is calculated, obtaining a corresponding plurality of (for example, 8) feature values.
Optionally, determining the first feature information corresponding to the sub-point cloud data according to the angle value between the normal vector and the vertical direction of each coordinate point may specifically include: dividing the sub-point cloud data into a second preset number of point cloud data groups according to a preset angle threshold based on the included angle value between the normal vector and the vertical direction of each coordinate point; determining a grouping included angle value corresponding to each point cloud data grouping according to the included angle value between the normal vector and the vertical direction of each coordinate point in each point cloud data grouping; and determining second sub-characteristic information of the sub-point cloud data according to the grouping included angle value of the point cloud data grouping.
In practical application, for example, interval statistics may be performed on the angles between the normal vectors of all point clouds of a single obstacle and the Z axis, with each statistical interval corresponding to one angle statistic. For example, 8 intervals may be divided based on the angle information between the normal vectors of all point clouds and the Z axis, thereby obtaining 8 feature statistics.
In the above embodiment, the first feature information may include at least one of the first sub-feature information and the second sub-feature information. Alternatively, the first sub-feature information and the second sub-feature information may constitute the first feature information.
On the other hand, in the embodiment of the present specification, the obstacle may be identified based on the material characteristics of the obstacle, or, alternatively, the obstacle may be identified based on the reflection intensity information carried in the point cloud data.
Specifically, the determining the feature vector of the sub-point cloud data may specifically include: obtaining reflection intensity information of each coordinate point in the sub-point cloud data; and determining second characteristic information of the sub-point cloud data according to the reflection intensity information of each coordinate point.
In practical application, for example, reflection intensity information corresponding to the sub-point cloud data can be counted to obtain 1 corresponding characteristic value.
In one or more embodiments of the present specification, the feature vector of a single obstacle may include at least one of the first feature information and the second feature information. Alternatively, the first characteristic information and the second characteristic information may constitute the characteristic vector of a single obstacle.
The first characteristic information of the single obstacle may be determined based on a normal vector of each coordinate point in the sub-point cloud data of the single obstacle, and more specifically, may be determined based on an included angle between the normal vector of each coordinate point in the sub-point cloud data of the single obstacle and the vertical direction. The second characteristic information of the single obstacle may be determined based on reflection intensity information of each coordinate point in sub-point cloud data of the single obstacle.
In practical applications, the first characteristic information may reflect shape characteristic information of the single obstacle. The second characteristic information may reflect material characteristic information of the single obstacle. Thus, in one or more embodiments of the present description, the type of obstacle may be identified based on at least one of an appearance characteristic and a material characteristic of the obstacle.
In practical application, the above example is taken as an example, if the feature vector of the single obstacle is determined based on the first sub-feature information, the second sub-feature information, and the second feature information, and assuming that the first sub-feature information is 8 feature values, the second sub-feature information is 8 feature values, and the second feature information is 1 feature value, the feature vector of the single obstacle may be composed of 17 feature values. It is to be understood that the number and the manner of construction of the feature values contained in the feature vector are not limited to the examples given herein.
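Taking the 17-value example above at face value, a feature vector combining the two 8-value normal-angle blocks and one reflection-intensity statistic could be assembled as follows; the slice and bin counts follow the text, while the choice of the mean as each statistic is an assumption:

```python
import numpy as np

def obstacle_feature_vector(points, normals, intensities, n_slices=8, n_bins=8):
    """Assemble a 17-value feature vector for a single obstacle (sketch)."""
    # Angle between each (unit) normal and the vertical (z) direction.
    cos_angle = np.clip(normals[:, 2], -1.0, 1.0)
    angles = np.arccos(np.abs(cos_angle))

    # First sub-feature: one angle statistic per equal-height slice along z.
    z = points[:, 2]
    edges = np.linspace(z.min(), z.max() + 1e-9, n_slices + 1)
    slice_feats = np.zeros(n_slices)
    for s in range(n_slices):
        in_slice = (z >= edges[s]) & (z < edges[s + 1])
        if in_slice.any():
            slice_feats[s] = angles[in_slice].mean()

    # Second sub-feature: normalized interval counts of the angles over [0, pi/2].
    hist, _ = np.histogram(angles, bins=n_bins, range=(0.0, np.pi / 2))
    hist = hist / max(len(angles), 1)

    # Second feature: a single reflection-intensity statistic (the mean here).
    return np.concatenate([slice_feats, hist, [np.mean(intensities)]])
```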
In the embodiment of the present disclosure, the classifier for identifying the obstacle point cloud may specifically include a classifier corresponding to the number of obstacle types to be identified and used for identifying the corresponding obstacle types. The classifier can be an AdaBoost classifier or a Support Vector Machine (SVM) and the like.
Before the feature vectors are identified using the preset joint classifier in the foregoing step 110, the preset joint classifier needs to be constructed; specifically, a classifier corresponding to each obstacle type to be identified needs to be trained in advance. Taking an AdaBoost classifier as an example, the construction process of a binary classifier corresponding to a preset obstacle type is described below.
Specifically, the process of constructing the AdaBoost classifier may include the following steps.
Collecting point cloud data samples corresponding to preset obstacle types; the point cloud data samples carry labels for representing the preset obstacle types. In practical applications, the preset obstacle types may include vehicles, bicycles, pedestrians, other obstacles, and the like. Other obstacles may include, for example, guardrails, roadblocks, and the like.
And determining the feature vector corresponding to the point cloud data sample.
And initializing sample weights of the point cloud data samples.
Training a weak classifier: and inputting the feature vector corresponding to the point cloud data sample and the corresponding sample weight into a weak classifier for classification training, so that the prediction error rate of the current weak classifier is minimum.
Determining sample weights: and obtaining the sample weight of each point cloud data sample corresponding to the next weak classifier according to the training result of the current weak classifier.
Repeating the steps of training the weak classifiers and determining the sample weights until the preset iteration times are reached, and obtaining a preset number of weak classifiers and weights corresponding to the weak classifiers; the weights corresponding to the weak classifiers are determined based on the prediction error rates corresponding to the weak classifiers.
Based on each weak classifier and the weight corresponding to each weak classifier, a strong classifier corresponding to the preset obstacle type is obtained; the joint classifier comprises at least one strong classifier corresponding to a preset obstacle type. The strong classifier obtained in this step may be used as the classifier for identifying the corresponding preset obstacle type.
According to the above description, in the practical application scenario provided in the embodiment of the present disclosure, the flow chart of the building process of the AdaBoost classifier is shown in fig. 3.
As shown in fig. 3, the process of constructing an AdaBoost classifier may include the following steps 301 to 305.
Step 301, initializing the first weak classifier, wherein each training sample is assigned the same weight, as given by formula (1):

D_1 = (ω_{1,1}, ω_{1,2}, ..., ω_{1,N}), ω_{1,i} = 1/N, i = 1, 2, ..., N    (1)

In formula (1), D_1 represents the sample weight distribution of the first weak classifier, ω_{1,i} represents the weight value of the i-th training sample of the first weak classifier, i represents the index of the sample, and N represents the number of training samples.
Step 302, calculating the error rate e_m of the m-th weak classifier by formula (2):

e_m = Σ_{i=1}^{N} ω_{m,i} · I(G_m(x_i) ≠ y_i)    (2)

In formula (2), e_m represents the prediction error rate of the m-th weak classifier, ω_{m,i} represents the weight value of the i-th sample of the m-th weak classifier, G_m represents the weak classifier obtained in the m-th iteration, x_i represents the i-th sample, y_i represents the true label of the sample, and I represents an indicator function that returns 1 when the prediction is wrong and 0 otherwise.
Step 303, according to the error rate e_m, the weight coefficient α_m of the m-th classifier is obtained by formula (3):

α_m = (1/2) · ln((1 - e_m) / e_m)    (3)

The sample weights for the next iteration, D_{m+1}, are then updated by formulas (4) to (6):

D_{m+1} = [ω_{m+1,1}, ω_{m+1,2}, ..., ω_{m+1,i}, ..., ω_{m+1,N}]    (4)

ω_{m+1,i} = (ω_{m,i} / Z_m) · exp(-α_m · y_i · G_m(x_i))    (5)

Z_m = Σ_{i=1}^{N} ω_{m,i} · exp(-α_m · y_i · G_m(x_i))    (6)

In formulas (4) to (6), ω_{m+1,i} represents the weight value of each sample during the (m+1)-th iteration, and Z_m represents the normalization factor.
Step 304, repeating step 302 and step 303 until the number of iterations reaches the preset upper limit, obtaining a plurality of weak classifiers.
Step 305, weighting and summing the weak classifiers according to their weight coefficients to obtain the final strong classifier F, computed from the weak classifiers by formula (7):

F(x) = sign(Σ_{m=1}^{M} α_m · G_m(x))    (7)
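A minimal, self-contained AdaBoost following formulas (1) to (7), with one-feature threshold stumps standing in for the weak classifiers, might look like the sketch below; the stump form is an assumption, since the description does not fix the weak classifier type:

```python
import numpy as np

def train_adaboost(X, y, n_rounds=10):
    """Minimal AdaBoost with one-feature threshold stumps (illustrative sketch).

    X: (N, D) feature vectors; y: labels in {-1, +1}.
    """
    N = len(y)
    w = np.full(N, 1.0 / N)                      # formula (1): uniform weights
    stumps = []
    for _ in range(n_rounds):
        best = None
        for d in range(X.shape[1]):
            for thr in np.unique(X[:, d]):
                for sign in (1, -1):
                    pred = np.where(X[:, d] >= thr, sign, -sign)
                    err = w[pred != y].sum()     # formula (2): weighted error
                    if best is None or err < best[0]:
                        best = (err, d, thr, sign)
        err, d, thr, sign = best
        err = min(max(err, 1e-10), 1 - 1e-10)    # guard the logarithm
        alpha = 0.5 * np.log((1 - err) / err)    # formula (3)
        pred = np.where(X[:, d] >= thr, sign, -sign)
        w = w * np.exp(-alpha * y * pred)        # formula (5), unnormalized
        w /= w.sum()                             # formula (6): divide by Z_m
        stumps.append((alpha, d, thr, sign))
    return stumps

def predict_adaboost(stumps, X):
    """Formula (7): sign of the alpha-weighted sum of weak classifier outputs."""
    total = np.zeros(len(X))
    for alpha, d, thr, sign in stumps:
        total += alpha * np.where(X[:, d] >= thr, sign, -sign)
    return np.where(total >= 0, 1, -1)
```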
according to one or more embodiments of the present disclosure, at least the following technical effects can be achieved:
(1) A low-cost, high-efficiency dynamic obstacle detection and classification method under high-speed road conditions is provided: under relatively simple road conditions such as expressways, an obstacle detection and classification system is built with traditional algorithms, and a stable algorithm effect is achieved without deep learning, so the cost is low and the efficiency is high;
(2) A point cloud filtering scheme and a dynamic obstacle extraction method which are integrated with a high-precision map, a multi-plane fitting model and a grid filtering algorithm are provided: in the ground point cloud filtering process, high-precision map information, a multi-plane fitting model and grid filtering results are comprehensively considered, so that the filtering results are more robust;
(3) The dynamic obstacle classification method based on the AdaBoost classifier and normal vector feature distribution is provided: when the category of the obstacle is predicted, the classifier is used for replacing a depth detection network, the segmented obstacle is directly classified, so that the classification process focuses on the obstacle, background information is ignored, the category prediction is not sensitive to the change of scene information, and the robustness is higher.
Based on the same thought, the embodiment of the specification also provides a device corresponding to the method.
Fig. 4 is a schematic structural diagram of an obstacle detecting apparatus corresponding to fig. 1 under a high-speed road condition according to an embodiment of the present disclosure.
As shown in fig. 4, the apparatus may include:
the point cloud data acquisition module 402 is configured to acquire point cloud data acquired by a sensor of a vehicle on an expressway;
the point cloud data filtering module 404 is configured to filter the point cloud data to obtain obstacle point cloud data;
The point cloud data clustering module 406 is configured to cluster the obstacle point cloud data to obtain sub point cloud data corresponding to a single obstacle;
a point cloud feature extraction module 408, configured to determine feature vectors of the sub-point cloud data;
the classifier module 410 is configured to input the feature vector to a preset joint classifier, so as to obtain a classification result output by the joint classifier; the joint classifier comprises at least one classifier;
and the obstacle type determining module 412 is configured to determine the obstacle type represented by the sub-point cloud data based on the classification result output by the joint classifier.
The present description example also provides some specific embodiments of the device based on the device of fig. 4, which is described below.
Optionally, the point cloud data filtering module 404 may specifically include: the low-level point cloud data determining unit is used for determining low-level point cloud data with z-axis coordinates smaller than a first preset threshold value from the point cloud data; the plane fitting unit is used for fitting to obtain a predicted ground plane equation based on the low-level cloud data; the first identification unit is used for identifying first-type filtering coordinate points in the point cloud data according to the predicted ground plane equation; and the filtering unit is used for filtering the first class of filtering coordinate points from the point cloud data to obtain obstacle point cloud data.
In practical application, the first identifying unit may be specifically configured to: for a first target coordinate point in the point cloud data, calculating a first predicted ground height corresponding to the first target coordinate point according to the predicted ground plane equation and the x-axis coordinate and the y-axis coordinate of the first target coordinate point; subtracting the first predicted ground height from the z-axis coordinate value of the first target coordinate point to obtain a first height difference; judging whether the first height difference value is smaller than a second preset threshold value or not to obtain a first judgment result; and if the first judgment result shows that the first height difference value is smaller than a second preset threshold value, determining the first target coordinate point as a first type filtering coordinate point.
Optionally, the point cloud data filtering module 404 may specifically further include: the grid dividing unit is used for dividing the point cloud data into a plurality of point cloud data grids with preset sizes; a second predicted ground height determining unit configured to determine, for a target grid of the plurality of point cloud data grids, a second predicted ground height corresponding to the target grid based on coordinate points in the target grid and coordinate points in neighboring grids of the target grid; and the second identification unit is used for identifying the first type of filtering coordinate points in the point cloud data according to the second predicted ground height.
In practical application, the second identifying unit may be specifically configured to: for a second target coordinate point in the target grid, subtracting the second predicted ground height from the second target coordinate point to obtain a second height difference; judging whether the second height difference value is smaller than a third preset threshold value or not to obtain a second judging result; and if the second judgment result shows that the second height difference value is smaller than a third preset threshold value, determining the second target coordinate point as a first type filtering coordinate point.
Optionally, the point cloud data filtering module 404 may specifically further include: the high-precision map data unit, used for acquiring a lane center point from the high-precision map data; and the third identification unit, used for determining coordinate points in the point cloud data whose distance from the lane center point is greater than a fourth preset threshold as second-type filtering coordinate points, where the second-type filtering coordinate points include coordinate points located outside the expressway. The filtering unit is further used for filtering the second-type filtering coordinate points out of the point cloud data to obtain obstacle point cloud data.
Optionally, the point cloud feature extraction module 408 may specifically include: a normal vector obtaining unit, configured to obtain a normal vector of each coordinate point in the sub-point cloud data, where the normal vector of each coordinate point in the sub-point cloud data is determined based on coordinate points in a preset range around the coordinate point; and the first characteristic information determining unit is used for determining first characteristic information corresponding to the sub-point cloud data according to the included angle value between the normal vector and the vertical direction of each coordinate point.
In practical application, the first characteristic information determining unit may specifically be configured to: dividing the sub-point cloud data into a first preset number of point cloud data fragments in the vertical direction; determining a fragmentation included angle value of each point cloud data fragment according to the included angle value of the normal vector and the vertical direction of each coordinate point in each point cloud data fragment; and determining first sub-characteristic information of the sub-point cloud data according to the fragment included angle value of the fragment of the point cloud data.
In practical application, the first characteristic information determining unit may specifically be configured to: dividing the sub-point cloud data into a second preset number of point cloud data groups according to a preset angle threshold based on the included angle value between the normal vector and the vertical direction of each coordinate point; determining a grouping included angle value corresponding to each point cloud data grouping according to the included angle value between the normal vector and the vertical direction of each coordinate point in each point cloud data grouping; and determining second sub-characteristic information of the sub-point cloud data according to the grouping included angle value of the point cloud data grouping.
Optionally, the point cloud feature extraction module 408 may specifically further include: the reflected intensity information acquisition unit is used for acquiring the reflected intensity information of each coordinate point in the sub-point cloud data; and the second characteristic information determining unit is used for determining second characteristic information of the sub-point cloud data according to the reflection intensity information of each coordinate point.
Optionally, the obstacle type determining module 412 may specifically be configured to: determine the number of votes corresponding to each preset obstacle type according to the classification results output by each classifier in the joint classifier; and determine the preset obstacle type with the highest number of votes as the obstacle type corresponding to the sub-point cloud data.
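The voting decision of the joint classifier can be sketched as follows, assuming each strong classifier exposes a boolean "matches my type" interface (an assumed API) and that an unclaimed obstacle falls back to an "other" label:

```python
from collections import Counter

def joint_classify(classifiers, feature_vector):
    """Vote-based decision of a joint classifier (illustrative sketch).

    classifiers: mapping from obstacle type to a binary strong classifier that
    returns True when the feature vector matches its type.
    """
    votes = Counter()
    for obstacle_type, clf in classifiers.items():
        if clf(feature_vector):
            votes[obstacle_type] += 1  # each positive classifier casts one vote
    if not votes:
        return "other"  # no classifier claimed the obstacle (fallback assumption)
    # The preset obstacle type with the highest number of votes wins.
    return votes.most_common(1)[0][0]
```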
Optionally, the obstacle detection device under high-speed road conditions may further include a classifier building module, configured to perform the following steps:
collecting point cloud data samples corresponding to preset obstacle types; the point cloud data samples carry labels for representing the preset obstacle types;
determining a feature vector corresponding to the point cloud data sample;
initializing sample weights of the point cloud data samples;
training a weak classifier: inputting the feature vector corresponding to each point cloud data sample, together with its sample weight, into a weak classifier for classification training, so as to minimize the prediction error rate of the current weak classifier;
determining sample weights: obtaining, according to the training result of the current weak classifier, the sample weight of each point cloud data sample for the next weak classifier;
repeating the steps of training weak classifiers and determining sample weights until a preset number of iterations is reached, to obtain a preset number of weak classifiers and their corresponding weights; the weight corresponding to each weak classifier is determined based on that weak classifier's prediction error rate;
and obtaining, based on each weak classifier and its corresponding weight, a strong classifier corresponding to the preset obstacle type; the joint classifier comprises at least one strong classifier corresponding to a preset obstacle type.
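The training loop above is the classic AdaBoost recipe. The following is a minimal sketch under stated assumptions: decision stumps stand in for the unspecified weak classifiers, labels are ±1 (one strong classifier per obstacle type, one-vs-rest), and all function names are illustrative.

```python
import numpy as np

def train_adaboost(X, y, n_rounds=10):
    """Minimal AdaBoost sketch of the described loop: threshold stumps as
    weak classifiers, sample-weight updates, and per-classifier weights
    derived from the weighted error rate. Labels y must be +1/-1."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                  # initialize sample weights
    learners = []
    for _ in range(n_rounds):
        best = None
        # train a weak classifier: pick the stump with minimal weighted error
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = np.where(X[:, j] >= thr, sign, -sign)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = min(max(err, 1e-10), 1.0 - 1e-10)
        alpha = 0.5 * np.log((1.0 - err) / err)   # classifier weight from error rate
        pred = np.where(X[:, j] >= thr, sign, -sign)
        # determine next round's sample weights: boost misclassified samples
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        learners.append((alpha, j, thr, sign))
    return learners

def strong_classify(learners, X):
    """The 'strong classifier': a weighted vote of the weak classifiers."""
    score = np.zeros(X.shape[0])
    for alpha, j, thr, sign in learners:
        score += alpha * np.where(X[:, j] >= thr, sign, -sign)
    return np.sign(score)
```

A joint classifier in the sense of the description would hold one such strong classifier per preset obstacle type and combine their outputs by voting.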
It will be appreciated that each of the modules described above refers to a computer program or program segment for performing one or more particular functions. Furthermore, the division into the above-described modules does not imply that the actual program code must be similarly separated.
Based on the same idea, the embodiment of the present disclosure further provides an apparatus corresponding to the above method, including a memory, a processor, and a computer program stored on the memory, where the processor executes the computer program to implement the following method:
acquiring point cloud data acquired by a sensor of a vehicle on an expressway;
filtering the point cloud data to obtain obstacle point cloud data;
clustering the obstacle point cloud data to obtain sub point cloud data corresponding to a single obstacle;
determining a feature vector of the sub-point cloud data;
inputting the feature vector into a preset joint classifier to obtain a classification result output by the joint classifier; the joint classifier comprises at least one classifier;
and determining the type of the obstacle represented by the sub-point cloud data based on the classification result output by the joint classifier.
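The five steps executed by the processor can be strung together end to end. This is a hedged sketch only: the flat-ground height filter, the naive single-linkage clustering, the centroid-plus-extent feature vector, and every name and threshold are illustrative stand-ins for the components the method leaves unspecified.

```python
import numpy as np

def detect_obstacles(points, ground_z=0.2, cluster_dist=0.5):
    """Illustrative pipeline: height-filter ground points, cluster the
    remainder into per-obstacle groups, and return each cluster with a
    feature vector that would be fed to the joint classifier."""
    # 1. filter: drop points near the (assumed flat) ground plane
    obstacle_pts = points[points[:, 2] > ground_z]

    # 2. cluster: naive single-linkage Euclidean clustering (O(n^2), demo only)
    n = len(obstacle_pts)
    labels = -np.ones(n, dtype=int)
    cur = 0
    for i in range(n):
        if labels[i] >= 0:
            continue
        stack, labels[i] = [i], cur
        while stack:
            k = stack.pop()
            d = np.linalg.norm(obstacle_pts - obstacle_pts[k], axis=1)
            for j in np.flatnonzero((d < cluster_dist) & (labels == -1)):
                labels[j] = cur
                stack.append(j)
        cur += 1

    # 3. feature vector per cluster (centroid + bounding-box extent as a stand-in)
    clusters = []
    for c in range(cur):
        pts = obstacle_pts[labels == c]
        feat = np.concatenate([pts.mean(axis=0), pts.max(axis=0) - pts.min(axis=0)])
        clusters.append((pts, feat))
    return clusters
```

A production system would replace each stage with the plane-fitting filter, the clustering algorithm, and the feature vector described in the embodiments.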
Based on the same idea, the embodiments of the present disclosure further provide a computer-readable medium corresponding to the above method: a computer-readable medium having stored thereon a computer program/instructions which, when executed by a processor, implement the following method:
acquiring point cloud data acquired by a sensor of a vehicle on an expressway;
filtering the point cloud data to obtain obstacle point cloud data;
clustering the obstacle point cloud data to obtain sub point cloud data corresponding to a single obstacle;
determining a feature vector of the sub-point cloud data;
inputting the feature vector into a preset joint classifier to obtain a classification result output by the joint classifier; the joint classifier comprises at least one classifier;
and determining the type of the obstacle represented by the sub-point cloud data based on the classification result output by the joint classifier.
Based on the same idea, the embodiments of the present disclosure further provide a computer program product corresponding to the above method, including a computer program/instruction, where the computer program/instruction when executed by a processor implements the following method:
acquiring point cloud data acquired by a sensor of a vehicle on an expressway;
filtering the point cloud data to obtain obstacle point cloud data;
clustering the obstacle point cloud data to obtain sub point cloud data corresponding to a single obstacle;
determining a feature vector of the sub-point cloud data;
inputting the feature vector into a preset joint classifier to obtain a classification result output by the joint classifier; the joint classifier comprises at least one classifier;
and determining the type of the obstacle represented by the sub-point cloud data based on the classification result output by the joint classifier.
The foregoing describes particular embodiments of the present disclosure. In some cases, the acts or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible and may be advantageous.
In this specification, the embodiments are described in a progressive manner; for identical or similar parts between the embodiments, reference may be made to each other.
The apparatus and the device provided in the embodiments of the present disclosure correspond to the method, and therefore have beneficial technical effects similar to those of the corresponding method. Since the beneficial technical effects of the method have been described in detail above, those of the corresponding apparatus and device are not repeated here.
For convenience of description, the above devices are described as being divided into various units by function. Of course, when the present application is implemented, the functions of the units may be realized in one or more pieces of software and/or hardware.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (15)

1. An obstacle detection method under high-speed road conditions, characterized by comprising the following steps:
acquiring point cloud data acquired by a sensor of a vehicle on an expressway;
filtering the point cloud data to obtain obstacle point cloud data;
clustering the obstacle point cloud data to obtain sub point cloud data corresponding to a single obstacle;
determining a feature vector of the sub-point cloud data;
inputting the feature vector into a preset joint classifier to obtain a classification result output by the joint classifier; the joint classifier comprises at least one classifier;
and determining the type of the obstacle represented by the sub-point cloud data based on the classification result output by the joint classifier.
2. The method of claim 1, wherein the filtering the point cloud data to obtain obstacle point cloud data specifically comprises:
determining low-point cloud data with z-axis coordinates smaller than a first preset threshold value from the point cloud data;
fitting to obtain a predicted ground plane equation based on the low-point cloud data;
identifying a first type of filtering coordinate point in the point cloud data according to the predicted ground plane equation;
and filtering the first class of filtering coordinate points from the point cloud data to obtain obstacle point cloud data.
3. The method of claim 2, wherein the identifying the first type of filtered coordinate points in the point cloud data according to the predicted ground plane equation, specifically comprises:
for a first target coordinate point in the point cloud data, calculating a first predicted ground height corresponding to the first target coordinate point according to the predicted ground plane equation and the x-axis coordinate and the y-axis coordinate of the first target coordinate point;
subtracting the first predicted ground height from the z-axis coordinate value of the first target coordinate point to obtain a first height difference;
judging whether the first height difference value is smaller than a second preset threshold value or not to obtain a first judgment result;
and if the first judgment result shows that the first height difference value is smaller than a second preset threshold value, determining the first target coordinate point as a first type filtering coordinate point.
4. The method of claim 2, wherein the filtering the point cloud data to obtain obstacle point cloud data specifically comprises:
dividing the point cloud data into a plurality of point cloud data grids with preset sizes;
for a target grid in the plurality of point cloud data grids, determining a second predicted ground height corresponding to the target grid based on coordinate points in the target grid and coordinate points in adjacent grids of the target grid;
and identifying first-class filtering coordinate points in the point cloud data according to the second predicted ground height.
5. The method of claim 4, wherein the identifying the first type of filtered coordinate points in the point cloud data based on the second predicted ground level comprises:
for a second target coordinate point in the target grid, subtracting the second predicted ground height from the z-axis coordinate value of the second target coordinate point to obtain a second height difference;
judging whether the second height difference value is smaller than a third preset threshold value or not to obtain a second judging result;
and if the second judgment result shows that the second height difference value is smaller than a third preset threshold value, determining the second target coordinate point as a first type filtering coordinate point.
6. The method according to any one of claims 1 to 5, wherein the filtering the point cloud data to obtain obstacle point cloud data specifically includes:
acquiring a lane center point from high-precision map data;
determining coordinate points, in the point cloud data, with the distance from the center point of the lane being greater than a fourth preset threshold value as second class filtering coordinate points; the second class of filtering coordinate points comprise coordinate points positioned outside the expressway;
and filtering the second class of filtering coordinate points from the point cloud data to obtain obstacle point cloud data.
7. The method of claim 1, wherein the determining the feature vector of the sub-point cloud data specifically comprises:
acquiring normal vectors of all coordinate points in the sub-point cloud data; the normal vector of each coordinate point in the sub-point cloud data is determined based on coordinate points in a preset range around the coordinate point;
and determining first characteristic information corresponding to the sub-point cloud data according to the included angle value between the normal vector and the vertical direction of each coordinate point.
8. The method of claim 7, wherein the determining the first characteristic information corresponding to the sub-point cloud data according to the angle value between the normal vector and the vertical direction of each coordinate point specifically includes:
dividing the sub-point cloud data into a first preset number of point cloud data fragments in the vertical direction;
determining a fragmentation included angle value of each point cloud data fragment according to the included angle value of the normal vector and the vertical direction of each coordinate point in each point cloud data fragment;
and determining first sub-characteristic information of the sub-point cloud data according to the fragment included angle value of the fragment of the point cloud data.
9. The method of claim 7, wherein the determining the first characteristic information corresponding to the sub-point cloud data according to the angle value between the normal vector and the vertical direction of each coordinate point specifically includes:
dividing the sub-point cloud data into a second preset number of point cloud data groups according to a preset angle threshold based on the included angle value between the normal vector and the vertical direction of each coordinate point;
determining a grouping included angle value corresponding to each point cloud data grouping according to the included angle value between the normal vector and the vertical direction of each coordinate point in each point cloud data grouping;
and determining second sub-characteristic information of the sub-point cloud data according to the grouping included angle value of the point cloud data grouping.
10. The method according to any one of claims 7 to 9, wherein the determining the feature vector of the sub-point cloud data specifically comprises:
obtaining reflection intensity information of each coordinate point in the sub-point cloud data;
and determining second characteristic information of the sub-point cloud data according to the reflection intensity information of each coordinate point.
11. The method of claim 1, wherein the determining the type of obstacle represented by the sub-point cloud data based on the classification result output by the joint classifier, specifically comprises:
determining the number of votes corresponding to each preset obstacle type according to the classification results output by each classifier in the joint classifier;
and determining the preset obstacle type with the highest number of votes as the obstacle type corresponding to the sub-point cloud data.
12. The method of claim 1, wherein before the inputting of the feature vector into a preset joint classifier to obtain the classification result output by the joint classifier, the method further comprises:
collecting point cloud data samples corresponding to preset obstacle types; the point cloud data samples carry labels for representing the preset obstacle types;
determining a feature vector corresponding to the point cloud data sample;
initializing sample weights of the point cloud data samples;
training a weak classifier: inputting the feature vector corresponding to the point cloud data sample and the corresponding sample weight into a weak classifier for classification training, so that the prediction error rate of the current weak classifier is minimum;
determining sample weights: according to the training result of the current weak classifier, obtaining the sample weight of each point cloud data sample corresponding to the next weak classifier;
repeating the steps of training the weak classifiers and determining the sample weights until the preset iteration times are reached, and obtaining a preset number of weak classifiers and weights corresponding to the weak classifiers; the weight corresponding to each weak classifier is determined based on the prediction error rate corresponding to each weak classifier;
based on each weak classifier and the weight corresponding to each weak classifier, obtaining a strong classifier corresponding to the preset obstacle type; the joint classifier comprises at least one strong classifier corresponding to a preset obstacle type.
13. An obstacle detection device under high-speed road conditions, the device comprising:
the point cloud data acquisition module is used for acquiring point cloud data acquired by a sensor of a vehicle on an expressway;
the point cloud data filtering module is used for filtering the point cloud data to obtain obstacle point cloud data;
the point cloud data clustering module is used for clustering the obstacle point cloud data to obtain sub point cloud data corresponding to a single obstacle;
the point cloud feature extraction module is used for determining feature vectors of the sub-point cloud data;
the classifier module is used for inputting the feature vector into a preset joint classifier to obtain a classification result output by the joint classifier; the joint classifier comprises at least one classifier;
and the obstacle type determining module is used for determining the obstacle type represented by the sub-point cloud data based on the classification result output by the joint classifier.
14. A computer device comprising a memory, a processor and a computer program stored on the memory, characterized in that the processor executes the computer program to carry out the steps of the method of any one of claims 1 to 12.
15. A computer readable storage medium/computer program product having stored thereon a computer program/instructions which, when executed by a processor, implement the steps of the method according to any one of claims 1 to 12.
CN202310434626.2A 2023-04-21 2023-04-21 Obstacle detection method, device, equipment and medium under high-speed road condition Pending CN116206286A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310434626.2A CN116206286A (en) 2023-04-21 2023-04-21 Obstacle detection method, device, equipment and medium under high-speed road condition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310434626.2A CN116206286A (en) 2023-04-21 2023-04-21 Obstacle detection method, device, equipment and medium under high-speed road condition

Publications (1)

Publication Number Publication Date
CN116206286A true CN116206286A (en) 2023-06-02

Family

ID=86509684

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310434626.2A Pending CN116206286A (en) 2023-04-21 2023-04-21 Obstacle detection method, device, equipment and medium under high-speed road condition

Country Status (1)

Country Link
CN (1) CN116206286A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116524472A (en) * 2023-06-30 2023-08-01 广汽埃安新能源汽车股份有限公司 Obstacle detection method, device, storage medium and equipment
CN116524472B (en) * 2023-06-30 2023-09-22 广汽埃安新能源汽车股份有限公司 Obstacle detection method, device, storage medium and equipment

Similar Documents

Publication Publication Date Title
CN108345822B (en) Point cloud data processing method and device
KR102198724B1 (en) Method and apparatus for processing point cloud data
WO2018068653A1 (en) Point cloud data processing method and apparatus, and storage medium
CN106570454B (en) Pedestrian traffic parameter extracting method based on mobile laser scanning
KR100201739B1 (en) Method for observing an object, apparatus for observing an object using said method, apparatus for measuring traffic flow and apparatus for observing a parking lot
CN110674705B (en) Small-sized obstacle detection method and device based on multi-line laser radar
CN105892471A (en) Automatic automobile driving method and device
JP6621445B2 (en) Feature extraction device, object detection device, method, and program
CN102076531A (en) Vehicle clear path detection
CN102208013A (en) Scene matching reference data generation system and position measurement system
CN110705385B (en) Method, device, equipment and medium for detecting angle of obstacle
CN114488194A (en) Method for detecting and identifying targets under structured road of intelligent driving vehicle
CN113989784A (en) Road scene type identification method and system based on vehicle-mounted laser point cloud
CN114454875A (en) Urban road automatic parking method and system based on reinforcement learning
CN111928860A (en) Autonomous vehicle active positioning method based on three-dimensional curved surface positioning capability
Liu et al. Deep-learning and depth-map based approach for detection and 3-D localization of small traffic signs
CN116206286A (en) Obstacle detection method, device, equipment and medium under high-speed road condition
CN115205803A (en) Automatic driving environment sensing method, medium and vehicle
CN110824495B (en) Laser radar-based drosophila visual inspired three-dimensional moving target detection method
CN113008296A (en) Method and vehicle control unit for detecting a vehicle environment by fusing sensor data on a point cloud plane
Gálai et al. Crossmodal point cloud registration in the Hough space for mobile laser scanning data
CN112578673B (en) Perception decision and tracking control method for multi-sensor fusion of formula-free racing car
Börcs et al. A model-based approach for fast vehicle detection in continuously streamed urban LIDAR point clouds
CN115083199A (en) Parking space information determination method and related equipment thereof
CN110348407A (en) One kind is parked position detecting method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination