CN113076922A - Object detection method and device - Google Patents
- Publication number: CN113076922A (application CN202110430291.8A)
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G06V20/64—Three-dimensional objects (Scenes; Scene-specific elements)
- G06F18/23211—Non-hierarchical clustering techniques using statistics or function optimisation, with adaptive number of clusters
- G06N3/04—Neural network architecture, e.g. interconnection topology
- G06V20/58—Recognition of moving objects or obstacles exterior to a vehicle, e.g. vehicles or pedestrians
- G06V20/584—Recognition of vehicle lights or traffic lights
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
Abstract
The invention provides an object detection method and device, applied in the technical field of automobiles. After the target number of detection objects contained in a point cloud cluster is determined from the voxels corresponding to that cluster, those voxels are clustered until the target number of voxel clusters is obtained, so that the laser points in the same voxel cluster correspond to the same detection object.
Description
Technical Field
The invention belongs to the technical field of automobiles, and particularly relates to an object detection method and device.
Background
In the field of driving assistance and intelligent driving, vehicle-mounted laser radars are widely used as a main means for detecting obstacles around a vehicle. In each detection period, the vehicle-mounted laser radar feeds back point clouds consisting of a plurality of laser points, and the vehicle-mounted controller completes the detection of obstacles around the vehicle by analyzing the point clouds.
Specifically, after obtaining the point cloud fed back by the laser radar, the vehicle-mounted controller usually performs cluster analysis on the point cloud to obtain one or more point cloud clusters, treats each point cloud cluster as corresponding to one object, and then applies a point cloud classification algorithm to the resulting point cloud clusters to determine the category of the object corresponding to each cluster.
However, in practical applications the driving environment of a vehicle is very complex: the objects around the vehicle and the distances between them change constantly, and existing object detection methods often suffer from under-clustering and over-clustering when identifying objects. Under-clustering refers to identifying one large object, such as a truck, as two independent objects; over-clustering refers to identifying two objects that are close together, such as a pedestrian and a car, as a single object. Existing object detection methods therefore have low detection accuracy, which affects the safe running of the vehicle.
Disclosure of Invention
In view of the above, the present invention aims to provide an object detection method and device that solve the over-clustering and under-clustering problems of the prior art, improve the accuracy of object detection, and thereby help improve driving safety. The specific scheme is as follows:
in a first aspect, the present invention provides an object detection method, comprising:
acquiring point clouds to be processed;
clustering laser points in the point cloud to be processed by taking a first preset distance threshold value as an upper limit value of a laser point interval to obtain at least one point cloud cluster;
the point cloud cluster comprises at least one detection object, and the first preset distance threshold is set based on the maximum distance that may occur between different parts of the same detection object;
constructing a directed surrounding frame of the point cloud cluster, and dividing a space corresponding to the directed surrounding frame into a plurality of voxels with preset specifications;
determining the target number of detection objects included in the point cloud cluster according to the voxels corresponding to the point cloud cluster;
and clustering the voxels corresponding to the point cloud clusters until the target number of voxel clusters are obtained, so that the laser points in the same voxel cluster correspond to the same detection object.
Optionally, the determining, according to the voxel corresponding to the point cloud cluster, the target number of the detection objects included in the point cloud cluster includes:
acquiring voxel characteristics of each voxel corresponding to the point cloud cluster, wherein the voxel characteristics are represented by characteristic vectors with preset dimensions;
and determining the target number of the detection objects included in the point cloud cluster according to the voxel characteristics of each voxel corresponding to the point cloud cluster.
Optionally, the obtaining of the voxel characteristics of each voxel corresponding to the point cloud cluster includes:
respectively taking each voxel corresponding to the point cloud cluster as a target voxel;
inputting laser points included in the target voxel into a pre-trained feature extraction model to obtain voxel features of the target voxel;
the characteristic extraction model is obtained by training a neural network by taking a laser point included by a voxel as input and taking a characteristic vector of a preset dimension as output.
Optionally, the determining, according to the voxel characteristics of each voxel corresponding to the point cloud cluster, the target number of detection objects included in the point cloud cluster includes:
acquiring a voxel standard feature set comprising a plurality of voxel standard features;
respectively calculating Euclidean distances between the voxel characteristics of the voxels and the standard characteristics of the voxels according to each voxel corresponding to the point cloud cluster;
taking the voxel characteristic of which the Euclidean distance is smaller than a second preset distance threshold value as a target voxel characteristic;
and determining the target number of the detection objects included in the point cloud cluster according to the number of the target voxel characteristics.
Optionally, the voxel standard feature set includes a plurality of feature subsets;
determining a target number of detection objects included in the point cloud cluster according to the number of the target voxel features includes:
respectively counting the number of target voxel characteristics corresponding to the voxel standard characteristics in each characteristic subset;
for each feature subset, taking the maximum value of the number of target voxel features corresponding to each voxel standard feature in the feature subset as the number of detection objects corresponding to the feature subset;
and taking the sum of the number of the detection objects corresponding to each characteristic subset as the target number of the detection objects included in the point cloud cluster.
Optionally, one feature subset corresponds to one object type, and the method further includes:
and determining the object type corresponding to the detected object according to the characteristic subset to which the voxel standard characteristic corresponding to the target voxel characteristic belongs.
Optionally, the process of obtaining the voxel standard feature set includes:
acquiring a sample point cloud of a sample object;
constructing a directed bounding box of the sample point cloud;
dividing a space corresponding to the directed surrounding frame of the sample point cloud into a plurality of sample voxels with the preset specification;
respectively inputting laser points included in the sample voxels into the feature extraction model to obtain corresponding candidate voxel standard features;
and screening the voxel standard characteristics meeting a preset screening rule from the candidate voxel standard characteristics to obtain the voxel standard characteristic set.
Optionally, the clustering the laser points in the point cloud to be processed with the first preset distance threshold as the upper limit value of the laser point interval to obtain at least one point cloud cluster includes:
executing the following operations until point cloud clusters to which all laser points in the point cloud to be processed belong are determined:
constructing an initial point cloud cluster comprising a target laser point, wherein the target laser point is any laser point in the point cloud to be processed whose point cloud cluster has not yet been determined;
calculating Euclidean distances between the target laser point and the laser points outside the initial clustering cluster;
storing the laser points with the Euclidean distance smaller than a first preset distance threshold value to the initial point cloud cluster;
sequentially taking the laser points stored in the initial clustering cluster as target laser points;
and returning to the step of calculating the Euclidean distance between the target laser point and the laser point outside the initial cluster until the Euclidean distance between the laser point outside the initial cluster and any target laser point in the initial cluster is greater than or equal to the first preset distance threshold value, and obtaining a final point cloud cluster.
In a second aspect, the present invention provides an object detecting apparatus comprising:
the first acquisition unit is used for acquiring point clouds to be processed;
the first clustering unit is used for clustering laser points in the point cloud to be processed by taking a first preset distance threshold value as an upper limit value of the laser point interval to obtain at least one point cloud cluster;
the point cloud cluster comprises at least one detection object, and the first preset distance threshold is set based on the maximum distance that may occur between different parts of the same detection object;
the dividing unit is used for constructing a directed surrounding frame of the point cloud cluster and dividing a space corresponding to the directed surrounding frame into a plurality of voxels with preset specifications;
the quantity determining unit is used for determining the target quantity of detection objects included in the point cloud cluster according to the voxels corresponding to the point cloud cluster;
and the second clustering unit is used for clustering the voxels corresponding to the point cloud clusters until the target number of voxel clusters are obtained, so that the laser points in the same voxel cluster correspond to the same detection object.
Optionally, the number determining unit, when determining the target number of the detection objects included in the point cloud cluster according to the voxels corresponding to the point cloud cluster, specifically includes:
acquiring voxel characteristics of each voxel corresponding to the point cloud cluster, wherein the voxel characteristics are represented by characteristic vectors with preset dimensions;
and determining the target number of the detection objects included in the point cloud cluster according to the voxel characteristics of each voxel corresponding to the point cloud cluster.
With the object detection method provided by the invention, the point cloud to be processed is clustered twice after it is obtained. In the first clustering pass, the first preset distance threshold is set based on the maximum distance that may occur between different parts of the same detection object, which prevents a large detection object from being split into several individuals and thus solves the under-clustering problem. Each resulting point cloud cluster is then clustered a second time on the basis of voxels: once the target number of detection objects contained in a point cloud cluster has been determined, that target number serves as the termination condition of the clustering process, ensuring that every detection object in the point cloud cluster is detected and thus solving the over-clustering problem. Compared with the prior art, the method provided by the invention therefore resolves both the over-clustering and under-clustering problems, improves the accuracy of object detection, and improves driving safety.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a flowchart of an object detection method according to an embodiment of the present invention;
fig. 2 is a block diagram of an object detection apparatus according to an embodiment of the present invention;
fig. 3 is a block diagram of another object detection apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The object detection method provided by the invention is applied to a vehicle-mounted controller, arranged in the vehicle, that is used for assisted driving or automatic driving. Of course, the method can also be applied to any other vehicle-mounted controller in the vehicle that needs to detect and identify objects based on the laser point cloud fed back by a laser radar, and in certain cases it can also be applied to a server on the network side.
Referring to fig. 1, fig. 1 is a flowchart of an object detection method according to an embodiment of the present invention, where the flowchart of the object detection method according to the embodiment of the present invention may include:
s100, point clouds to be processed are obtained.
In practical applications, the laser radar feeds back point clouds at a fixed sampling period. The point cloud to be processed mentioned in the embodiment of the invention may be a point cloud fed back directly by the laser radar or a point cloud temporarily stored in a memory; any point cloud requiring object detection and identification may serve as the point cloud to be processed in this embodiment.
S110, clustering laser points in the point cloud to be processed by taking the first preset distance threshold as an upper limit value of the laser point interval to obtain at least one point cloud cluster.
In the present embodiment, the first preset distance threshold is set based on the maximum distance that may occur between different parts of the same detection object. It follows from this definition that the choice of the first preset distance threshold differs between application scenarios. For example, in a high-speed automatic driving scenario, objects are generally continuous rigid bodies and the distances between objects are relatively large, so the first preset distance threshold may be set to 0.2 to 1.5 meters; in a port scenario, trucks are typically bulky and the parts of a truck are widely spaced, so the first preset distance threshold may be set to 0.5 to 3.0 meters.
Optionally, after the point cloud to be processed is obtained, the following operations may be performed until point cloud clusters to which all laser points in the point cloud to be processed belong are determined:
constructing an initial point cloud cluster comprising a target laser point, wherein the target laser point is any one of laser points of undetermined point cloud clusters to which the target laser point belongs in the point cloud cluster to be processed;
calculating the Euclidean distance between the target laser point and the laser points outside the initial cluster;
storing the laser points with the Euclidean distance smaller than a first preset distance threshold value to an initial point cloud cluster;
sequentially taking the laser points stored in the initial clustering cluster as target laser points;
and returning to the step of calculating the Euclidean distance between the target laser point and the laser point outside the initial cluster until the Euclidean distance between the laser point outside the initial cluster and any target laser point in the initial cluster is greater than or equal to a first preset distance threshold value, and obtaining a final point cloud cluster.
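The region-growing operations above can be sketched as follows. This is an illustrative implementation, not the patented code, and a brute-force neighbour search stands in for whatever spatial index a production system would use:

```python
import math

def euclidean_cluster(points, d_max):
    """Region-growing clustering sketch: any two points closer than d_max
    end up in the same point cloud cluster (illustrative helper)."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()          # start a new initial cluster
        queue = [seed]
        cluster = [seed]
        while queue:
            p = queue.pop()
            # find unvisited points within d_max of the current target point
            near = [q for q in unvisited
                    if math.dist(points[p], points[q]) < d_max]
            for q in near:
                unvisited.remove(q)
                queue.append(q)
                cluster.append(q)
        clusters.append(sorted(cluster))  # cluster is final once no point qualifies
    return clusters
```

With a threshold of 1.0 m, two points 0.1 m apart join one cluster while points 5 m away form their own.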
Based on the method for partitioning the point cloud cluster, a more specific method for partitioning the point cloud cluster is given as follows:
And S1, representing the point cloud to be processed by the point set P = {p0, …, pi, …, pn} and creating a Kd tree for the point cloud to be processed, wherein n is the number of laser points included in the point cloud to be processed.
Further, a list Q and a list C are preset, the initial point cloud cluster in the current clustering process is stored in the list Q, and the final point cloud cluster obtained after clustering is finished is stored in the list C.
According to the basic principle of the Kd tree, constructing a Kd tree for the point cloud to be processed effectively reduces the number of screening and comparison operations over the laser points during the subsequent selection process, thereby improving the efficiency of the point cloud cluster division in this step. The specific construction of the Kd tree can be realized with reference to the prior art, and the present invention is not limited in this respect.
S2, assuming that the initial states of all the laser points in the point cloud to be processed are marked as unprocessed states, executing the following steps for each laser point pi in the point cloud set P:
S21, judging the marking state of pi: if it is in the unprocessed state, add pi to the list Q and continue with S22; if it is in the processed state, skip pi and execute S21 to S24 on the next laser point;
S22, marking the state of pi as processed, searching for the m laser points nearest to pi using a Kd tree neighbor query, and calculating the distances from these m laser points to pi; laser points whose distance is less than the first preset distance threshold and which are marked as unprocessed are added to the list Q, while laser points whose distance is greater than the first preset distance threshold, or whose marking state is processed, are skipped without any processing;
S23, returning to S22 for each unprocessed point in the list Q until every laser point in the list Q is marked as processed;
and S24, if the number of the laser points in the list Q is between [ A, B ], taking the current point cloud cluster in the list Q as the point cloud cluster obtained by final division, and storing the point cloud cluster in the list C. And setting the list Q as an empty set for the next operation.
Wherein, A represents the minimum laser point number corresponding to the object in the point cloud, and B represents the maximum laser point number corresponding to the object in the point cloud. In practical application, the values of a and B need to be selected in combination with a specific application scenario.
And S3, when the states of all the laser points in the point cloud to be processed are processed, the algorithm is terminated.
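Steps S1 to S3 can be sketched as below. For brevity the Kd tree is replaced by a linear neighbour scan (a real implementation would use a Kd-tree library such as SciPy's cKDTree); the lists Q and C and the point-count bounds [A, B] follow the procedure above, and the code is illustrative rather than the patented implementation:

```python
import math

def cluster_with_bounds(points, d_thresh, a_min, b_max):
    """Sketch of steps S1-S3: grow each cluster via list Q, then keep it in
    list C only if its point count lies within [a_min, b_max]."""
    processed = [False] * len(points)   # S2: all points start unprocessed
    C = []                              # final point cloud clusters
    for i in range(len(points)):
        if processed[i]:                # S21: skip already-processed points
            continue
        Q = [i]                         # S21: seed list Q with p_i
        processed[i] = True
        head = 0
        while head < len(Q):            # S22-S23: expand every point in Q
            p = Q[head]
            head += 1
            for q in range(len(points)):
                if not processed[q] and math.dist(points[p], points[q]) < d_thresh:
                    processed[q] = True
                    Q.append(q)
        if a_min <= len(Q) <= b_max:    # S24: size filter [A, B]
            C.append(sorted(Q))
    return C                            # S3: all points processed
```

A lone stray point is discarded by the [A, B] filter, while a chain of close points survives as one cluster.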
Based on the above, it can be seen that the step is the first clustering of the point clouds to be processed, and the general principle in the processing process is to ensure that the same object is not divided into two point cloud clusters.
It should be noted in advance that, through the processing of this step, the point cloud to be processed may be divided into a plurality of point cloud clusters, and based on this, the processing steps in subsequent S120 to S140 are required to be executed for each point cloud cluster, that is, the steps S120 to S140 are executed for each point cloud cluster, which is not separately described in the subsequent content.
S120, constructing a directed surrounding frame of the point cloud cluster, and dividing a space corresponding to the directed surrounding frame into a plurality of voxels with preset specifications.
After the point cloud cluster is obtained through the division in the previous steps, a directed surrounding frame of the point cloud cluster is constructed, then, the space corresponding to the obtained directed surrounding frame is divided according to the preset specification, and finally, a plurality of voxels with the preset specification can be obtained.
It should be noted that, the construction of the directed bounding box for the point cloud cluster and the division of the voxels may be implemented by referring to the prior art, which is not limited in the present invention. As for the preset specification, the data processing capability of the controller and the precision requirement of object detection can be combined for flexible selection.
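Voxel division can be sketched as below. One simplification to note: the patent divides the space of a directed (oriented) bounding box, whereas this sketch buckets points in the axis-aligned bounding box of the cluster; in a full implementation the points would first be rotated into the box's local frame. The voxel size stands in for the preset specification:

```python
from collections import defaultdict

def voxelize(points, voxel_size):
    """Bucket laser points into cubic voxels of a preset size, keyed by
    integer grid indices relative to the bounding box's minimum corner
    (axis-aligned sketch of the patent's oriented-box division)."""
    x0 = min(p[0] for p in points)
    y0 = min(p[1] for p in points)
    z0 = min(p[2] for p in points)
    voxels = defaultdict(list)
    for p in points:
        key = (int((p[0] - x0) // voxel_size),
               int((p[1] - y0) // voxel_size),
               int((p[2] - z0) // voxel_size))
        voxels[key].append(p)
    return voxels
```

Only occupied voxels are stored, which keeps memory proportional to the point count rather than the box volume.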
S130, determining the target number of the detection objects included in the point cloud cluster according to the voxels corresponding to the point cloud cluster.
In order to determine the specific number of detection objects included in each point cloud cluster, the detection method provided by the embodiment of the invention presets a voxel standard feature set, the voxel standard feature set includes a plurality of voxel standard features, and the number of detection objects included in each point cloud cluster is determined based on the voxel standard features.
In the following, the process of acquiring the voxel standard feature set is described:
First, a sample point cloud of a sample object is obtained. The sample object mentioned in this embodiment refers to a known, labeled sample object. In actual operation, real objects such as a truck, a car, or a human body can be used as sample objects, and the sample point clouds of these sample objects fed back by the laser radar are then obtained. Since the objects scanned by the laser radar are known, both the categories of the sample objects contained in the obtained sample point cloud and their exact number are clearly known.
After the sample point cloud is obtained, a directed surrounding frame of the sample point cloud can be constructed, and a space corresponding to the directed surrounding frame of the sample point cloud is divided into a plurality of sample voxels with the preset specification. It should be noted that, in practical applications, the preset specification according to which the voxel is divided is consistent with the preset specification according to which the voxel standard feature is set, so as to ensure the validity of the voxel standard feature.
Then, the laser points included in each sample voxel are respectively input into a preset feature extraction model, and corresponding candidate voxel standard features are obtained. The feature extraction model provided by the embodiment of the invention is obtained by training a neural network by taking a laser point included by a voxel as input and taking a feature vector of a preset dimension as output. As for the specific training process of the feature extraction model, it can be implemented by referring to the prior art, and the present invention is not limited thereto.
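As a stand-in for the trained feature extraction model, the sketch below maps a variable number of laser points in a voxel to a fixed-dimension feature vector using a random linear layer followed by element-wise max pooling (a PointNet-style symmetric function, so the output does not depend on point order). The random weights are placeholders for trained network weights and are purely illustrative:

```python
import math
import random

def extract_voxel_feature(points, dim=8, seed=0):
    """Map a voxel's laser points to a fixed-dimension feature vector.
    Placeholder for the patent's trained neural network: a random linear
    layer plus tanh, max-pooled over points (order-invariant)."""
    rng = random.Random(seed)           # deterministic placeholder weights
    W = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(dim)]
    feat = [-math.inf] * dim
    for x, y, z in points:
        for j in range(dim):
            v = math.tanh(W[j][0] * x + W[j][1] * y + W[j][2] * z)
            feat[j] = max(feat[j], v)   # symmetric pooling over points
    return feat
```

Because the pooling is symmetric, permuting the input points leaves the feature vector unchanged, which is the key property such a model needs for unordered point sets.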
Because the feature vectors of same-category sample objects at the same part are often very close, after the candidate voxel standard features of each sample object are obtained, the voxel standard features satisfying a preset screening rule must be screened out from the candidate voxel standard features; the set formed by the screened voxel features is the voxel standard feature set. Because the voxel standard feature set occupies a certain amount of storage space, the preset screening rule can be chosen according to parameters such as the storage space of the vehicle-mounted memory and the computing capability of the controller; for example, the number of voxel standard features may be capped, or near-duplicate voxel standard features may be merged.
Optionally, based on the provided voxel standard feature set, each voxel of each point cloud cluster corresponding to the point cloud to be processed is first used as a target voxel, and a laser point included in the target voxel is input into the pre-trained feature extraction model mentioned in the foregoing content, so as to obtain a voxel feature of each target voxel, that is, a feature vector of a preset dimension.
And then, determining the target number of the detection objects included in the point cloud cluster according to the voxel features of each voxel corresponding to the point cloud cluster. Specifically, for each voxel corresponding to the point cloud cluster, the Euclidean distance between the voxel feature of that voxel and each voxel standard feature is calculated, and any voxel feature whose Euclidean distance is smaller than a second preset distance threshold is taken as a target voxel feature. The Euclidean distance is calculated as:

d = sqrt( Σ k=1..L (Eik − Vjk)² )

where d denotes the Euclidean distance between the voxel feature and the voxel standard feature; Ei denotes the i-th voxel feature; Vj denotes the j-th voxel standard feature; L denotes the feature length (for example, a feature of length 128 is represented by a sequence of 128 numbers); Eik denotes the k-th element value of feature Ei; and Vjk denotes the k-th element value of standard feature Vj.
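The matching step, in which a voxel feature is kept as a target voxel feature when its Euclidean distance to some voxel standard feature is below the second preset distance threshold, can be sketched as follows (illustrative, with features represented as plain lists of numbers):

```python
import math

def match_target_features(voxel_feats, standard_feats, d2_thresh):
    """Return the voxel features whose Euclidean distance to at least one
    voxel standard feature is below the second preset distance threshold."""
    targets = []
    for e in voxel_feats:
        for v in standard_feats:
            # d = sqrt(sum_k (E_ik - V_jk)^2)
            d = math.sqrt(sum((ek - vk) ** 2 for ek, vk in zip(e, v)))
            if d < d2_thresh:
                targets.append(e)       # e is a target voxel feature
                break                   # one match is enough
    return targets
```

A feature far from every standard feature is simply dropped, so distant clutter voxels do not contribute to the object count.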
After the target voxel features included in each point cloud cluster are determined, the target number of the detection objects included in the corresponding point cloud cluster can be determined according to the number of the target voxel features in each point cloud cluster.
Specifically, in the process of constructing the voxel standard feature set, different types of sample objects are selected, so that the voxel standard feature set can be further divided into a plurality of feature subsets according to the different types of the sample objects, and each feature subset corresponds to one object type. Based on the above, respectively counting the number of target voxel characteristics corresponding to each voxel standard characteristic in each characteristic subset; and aiming at each feature subset, taking the maximum value in the number of target voxel features corresponding to each voxel standard feature in the feature subset as the number of detection objects corresponding to the feature subset.
Then, the sum of the number of detection objects corresponding to each feature subset is used as the target number of detection objects included in the point cloud cluster.
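The per-subset counting rule described above can be sketched as follows; the pair-list input format and the name `count_detection_objects` are illustrative assumptions, not part of the patent.

```python
from collections import defaultdict

def count_detection_objects(matches, subset_of):
    """Estimate the target number of detection objects in a point cloud cluster.

    matches:   list of (target_feature_id, standard_feature_id) pairs, one per
               target voxel feature and the standard feature it matched
    subset_of: dict mapping standard_feature_id -> feature-subset id
               (each subset corresponds to one object class)

    Per subset, the object count is the maximum number of target voxel
    features matched to any single standard feature in that subset; the
    cluster total is the sum over all subsets.
    """
    per_std = defaultdict(int)
    for _, std_id in matches:
        per_std[std_id] += 1                      # matches per standard feature
    per_subset = defaultdict(int)
    for std_id, n in per_std.items():
        subset = subset_of[std_id]
        per_subset[subset] = max(per_subset[subset], n)  # max within subset
    return sum(per_subset.values())               # sum across subsets
```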
And S140, clustering the voxels corresponding to the point cloud clusters until a target number of voxel clusters are obtained, so that the laser points in the same voxel cluster correspond to the same detection object.
After the target number of the detection objects included in the point cloud cluster is determined, the voxels corresponding to the point cloud cluster can be clustered until the voxel clusters with the target number are obtained, so that the laser points in the same voxel cluster correspond to the same detection object.
Through the steps, the multiple detection objects included in the point cloud cluster can be finally distinguished, and the problem of under-clustering in the prior art is solved.
It should be noted that the process of clustering the voxels in the point cloud cluster can be implemented by referring to a clustering method in the prior art, and details are not described here.
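Since the patent defers the concrete clustering method to the prior art, the following sketch only illustrates one possibility: agglomeratively merging voxel centroids (closest pair first, an assumed criterion) until exactly the target number of voxel clusters remains, which matches the end condition of step S140.

```python
import math

def cluster_voxels(centroids, target_count):
    """Merge voxel centroids until exactly `target_count` voxel clusters
    remain (the stop condition described in step S140).

    centroids: list of (x, y, z) voxel centre coordinates
    Returns a list of clusters, each a list of indices into `centroids`.
    """
    clusters = [[i] for i in range(len(centroids))]

    def centre(c):  # mean position of a cluster's voxels
        n = len(c)
        return tuple(sum(centroids[i][d] for i in c) / n for d in range(3))

    while len(clusters) > target_count:
        best, pair = math.inf, None
        for a in range(len(clusters)):          # find the closest cluster pair
            for b in range(a + 1, len(clusters)):
                d = math.dist(centre(clusters[a]), centre(clusters[b]))
                if d < best:
                    best, pair = d, (a, b)
        a, b = pair
        clusters[a].extend(clusters.pop(b))     # merge the pair
    return clusters
```

Any prior-art clustering whose iteration can be stopped at a given cluster count (e.g. hierarchical clustering cut at k clusters) would fit the same role.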
In summary, in the object detection method provided in the embodiments of the present invention, after the point cloud to be processed is obtained, the point cloud is clustered twice. In the first clustering, which produces the point cloud clusters, the first preset distance threshold is set based on the maximum distance that may occur between different parts of the same detection object; this prevents a detection object with a large appearance from being divided into a plurality of individuals, solving the over-clustering problem. Then, for each point cloud cluster, a second, voxel-based clustering is performed: after the number of detection objects included in the point cloud cluster is determined, that target number is used as the end condition of the clustering process, which ensures that each detection object in the point cloud cluster can be detected and thereby solves the under-clustering problem. Therefore, compared with the prior art, the method provided in the present invention can solve both the over-clustering and under-clustering problems of the prior art, improving the accuracy of object detection and, in turn, driving safety.
Further, as described above, each feature subset in the voxel standard feature set corresponds to one object class, and while the number of targets of the detection objects included in the point cloud cluster is determined in the foregoing step, the type of the object corresponding to the detection object may also be determined according to the feature subset to which the voxel standard feature corresponding to the target voxel feature belongs, for example, it is specifically determined whether the detection object is a vehicle or a pedestrian.
In the following, the object detection apparatus provided in the embodiment of the present invention is introduced. The object detection apparatus described below may be regarded as the functional module architecture that needs to be set in the central device to implement the object detection method provided in the embodiment of the present invention; the following description may be cross-referenced with the above.
Optionally, referring to fig. 2, fig. 2 is a block diagram of a structure of an object detection apparatus according to an embodiment of the present invention, where the object detection apparatus according to the embodiment includes:
a first acquiring unit 10, configured to acquire a point cloud to be processed;
the first clustering unit 20 is configured to cluster the laser points in the point cloud to be processed by using a first preset distance threshold as an upper limit value of the laser point interval, so as to obtain at least one point cloud cluster;
the point cloud cluster comprises at least one detection object, and the first preset distance threshold is set based on the maximum distance which possibly occurs between different parts of the same detection object;
the dividing unit 30 is configured to construct a directed bounding box of the point cloud cluster, and divide a space corresponding to the directed bounding box into a plurality of voxels with preset specifications;
a quantity determining unit 40, configured to determine, according to the voxels corresponding to the point cloud cluster, a target quantity of detection objects included in the point cloud cluster;
and the second clustering unit 50 is used for clustering the voxels corresponding to the point cloud clusters until a target number of voxel clusters are obtained, so that the laser points in the same voxel cluster correspond to the same detection object.
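As an illustrative sketch of the space partitioning performed by the dividing unit, laser points can be grouped into voxels of a preset size. For brevity the sketch assumes the points have already been transformed into the bounding box frame, so an axis-aligned grid suffices; `voxelize` is a hypothetical helper name.

```python
import numpy as np

def voxelize(points, voxel_size):
    """Divide the space of a point cloud cluster into voxels of a preset
    size and group laser points by the voxel they fall into.

    points: (n, 3) array of laser points (assumed already rotated into the
            bounding box frame, so an axis-aligned grid is sufficient)
    Returns {voxel_index_tuple: (m, 3) array of the points in that voxel}.
    """
    origin = points.min(axis=0)                       # corner of the box
    idx = np.floor((points - origin) / voxel_size).astype(int)
    voxels = {}
    for key, pt in zip(map(tuple, idx), points):
        voxels.setdefault(key, []).append(pt)
    return {k: np.array(v) for k, v in voxels.items()}
```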
Optionally, the quantity determining unit 40 is configured to, when determining the target quantity of the detection object included in the point cloud cluster according to the voxel corresponding to the point cloud cluster, specifically include:
acquiring voxel characteristics of each voxel corresponding to the point cloud cluster, wherein the voxel characteristics are represented by characteristic vectors of preset dimensions;
and determining the target number of the detection objects included in the point cloud cluster according to the voxel characteristics of each voxel corresponding to the point cloud cluster.
Optionally, the quantity determining unit 40 is configured to obtain a voxel characteristic of each voxel corresponding to the point cloud cluster, and specifically includes:
respectively taking each voxel corresponding to the point cloud cluster as a target voxel;
inputting laser points included in the target voxel into a pre-trained feature extraction model to obtain the voxel feature of the target voxel;
the characteristic extraction model is obtained by training a neural network by taking a laser point included by a voxel as input and taking a characteristic vector of a preset dimension as output.
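The feature extraction model above is only constrained to map a voxel's laser points to a feature vector of a preset dimension. A minimal PointNet-style sketch is shown below; the shared per-point layer plus max-pooling architecture, and the random untrained weights, are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

class VoxelFeatureExtractor:
    """Toy PointNet-style extractor: a shared per-point linear layer with
    ReLU, followed by max-pooling over the points, yields one fixed-length
    feature per voxel regardless of how many laser points it contains."""

    def __init__(self, point_dim=3, feat_dim=128):
        # In the patent this network is trained; random weights stand in here.
        self.W = rng.standard_normal((point_dim, feat_dim)) * 0.1
        self.b = np.zeros(feat_dim)

    def __call__(self, points):
        h = np.maximum(points @ self.W + self.b, 0.0)  # shared layer + ReLU
        return h.max(axis=0)  # symmetric max-pool -> point-order invariant
```

The max-pool makes the output independent of the ordering of the laser points, which is a common design choice for point cloud feature extractors.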
Optionally, the number determining unit 40 is configured to determine, according to the voxel characteristics of each voxel corresponding to the point cloud cluster, the target number of the detection object included in the point cloud cluster, and specifically includes:
acquiring a voxel standard feature set comprising a plurality of voxel standard features;
respectively calculating Euclidean distances between voxel characteristics of the voxels and standard characteristics of the voxels according to each voxel corresponding to the point cloud cluster;
taking the voxel characteristic of which the Euclidean distance is smaller than a second preset distance threshold value as a target voxel characteristic;
and determining the target number of the detection objects included in the point cloud cluster according to the number of the target voxel characteristics.
Optionally, the voxel standard feature set includes a plurality of feature subsets;
the number determining unit 40 is configured to determine a target number of detection objects included in the point cloud cluster according to the number of the target voxel features, and specifically includes:
respectively counting the number of target voxel characteristics corresponding to the standard characteristics of each voxel in each characteristic subset;
aiming at each feature subset, taking the maximum value in the number of target voxel features corresponding to each voxel standard feature in the feature subsets as the number of detection objects corresponding to the feature subsets;
and taking the sum of the number of the detection objects corresponding to the characteristic subsets as the target number of the detection objects included in the point cloud cluster.
Optionally, the quantity determining unit 40 is configured to perform a process of obtaining a voxel standard feature set, and specifically includes:
acquiring a sample point cloud of a sample object;
constructing a directed bounding box of the sample point cloud;
dividing a space corresponding to the directed bounding box of the sample point cloud into a plurality of sample voxels with preset specifications;
respectively inputting laser points included in sample voxels into a feature extraction model to obtain corresponding candidate voxel standard features;
and screening the voxel standard characteristics meeting the preset screening rule from the candidate voxel standard characteristics to obtain a voxel standard characteristic set.
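The pipeline above, from sample voxels to the voxel standard feature set, can be sketched as follows. The patent does not specify the preset screening rule, so a minimum-point-count rule is assumed here purely for illustration; `build_standard_feature_set` is a hypothetical name.

```python
import numpy as np

def build_standard_feature_set(sample_voxels, extract, min_points=5):
    """Build the voxel standard feature set from sample voxels.

    sample_voxels: list of (n_i, 3) arrays of laser points, one per voxel
    extract:       feature extraction model (points -> feature vector)
    min_points:    assumed screening rule: keep candidate features only
                   from voxels holding at least this many laser points
    """
    feats = []
    for pts in sample_voxels:
        if len(pts) >= min_points:          # preset screening rule (assumed)
            feats.append(extract(pts))      # candidate -> accepted feature
    return np.stack(feats) if feats else np.empty((0, 0))
```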
Optionally, the first clustering unit 20 is configured to cluster the laser points in the point cloud to be processed by using a first preset distance threshold as an upper limit value of the laser point interval, so as to obtain at least one point cloud cluster, and specifically includes:
executing the following operations until point cloud clusters to which all laser points in the point cloud to be processed belong are determined:
constructing an initial point cloud cluster comprising a target laser point, wherein the target laser point is any laser point in the point cloud to be processed whose point cloud cluster has not yet been determined;
calculating the Euclidean distance between the target laser point and the laser points outside the initial cluster;
storing the laser points with the Euclidean distance smaller than a first preset distance threshold value to an initial point cloud cluster;
sequentially taking the laser points stored in the initial clustering cluster as target laser points;
and returning to the step of calculating the Euclidean distance between the target laser point and the laser point outside the initial cluster until the Euclidean distance between the laser point outside the initial cluster and any target laser point in the initial cluster is greater than or equal to a first preset distance threshold value, and obtaining a final point cloud cluster.
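The region-growing procedure described above can be sketched as follows; `euclidean_cluster` is a hypothetical name, and the arbitrary absorption order corresponds to the step of sequentially taking the stored laser points as target laser points.

```python
import math

def euclidean_cluster(points, dist_threshold):
    """Region-growing Euclidean clustering (the first clustering step):
    start a cluster from any unassigned laser point, repeatedly absorb
    every point within `dist_threshold` of any point already in the
    cluster, and continue until all points belong to some cluster.

    points: list of (x, y, z) tuples; returns lists of point indices.
    """
    unassigned = set(range(len(points)))
    clusters = []
    while unassigned:
        seed = unassigned.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            target = frontier.pop()  # each stored point becomes a target in turn
            near = [i for i in unassigned
                    if math.dist(points[target], points[i]) < dist_threshold]
            for i in near:
                unassigned.remove(i)
            cluster.extend(near)     # store the absorbed points
            frontier.extend(near)    # ...and grow from them next
        clusters.append(cluster)     # no outside point within threshold: done
    return clusters
```

With the first preset distance threshold chosen from the maximum distance between parts of one object, a single large object (e.g. a truck) lands in one cluster rather than several.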
Optionally, a feature subset corresponds to an object type, on this basis, referring to fig. 3, fig. 3 is a block diagram of a structure of another object detection apparatus provided in an embodiment of the present invention, and on the basis of the embodiment shown in fig. 2, the apparatus further includes:
the type determining unit 60 is configured to determine an object type corresponding to the detected object according to the feature subset to which the voxel standard feature corresponding to the target voxel feature belongs.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (10)
1. An object detection method, comprising:
acquiring point clouds to be processed;
clustering laser points in the point cloud to be processed by taking a first preset distance threshold value as an upper limit value of a laser point interval to obtain at least one point cloud cluster;
the point cloud cluster comprises at least one detection object, and the first preset distance threshold is set based on the maximum distance which possibly occurs between different parts of the same detection object;
constructing a directed bounding box of the point cloud cluster, and dividing a space corresponding to the directed bounding box into a plurality of voxels with preset specifications;
determining the target number of detection objects included in the point cloud cluster according to the voxels corresponding to the point cloud cluster;
and clustering the voxels corresponding to the point cloud clusters until the target number of voxel clusters are obtained, so that the laser points in the same voxel cluster correspond to the same detection object.
2. The object detection method according to claim 1, wherein the determining the target number of detection objects included in the point cloud cluster according to the voxels corresponding to the point cloud cluster comprises:
acquiring voxel characteristics of each voxel corresponding to the point cloud cluster, wherein the voxel characteristics are represented by characteristic vectors with preset dimensions;
and determining the target number of the detection objects included in the point cloud cluster according to the voxel characteristics of each voxel corresponding to the point cloud cluster.
3. The object detection method according to claim 2, wherein the obtaining of the voxel characteristics of each voxel corresponding to the point cloud cluster includes:
respectively taking each voxel corresponding to the point cloud cluster as a target voxel;
inputting laser points included in the target voxel into a pre-trained feature extraction model to obtain voxel features of the target voxel;
the characteristic extraction model is obtained by training a neural network by taking a laser point included by a voxel as input and taking a characteristic vector of a preset dimension as output.
4. The object detection method according to claim 2, wherein the determining the target number of detection objects included in the point cloud cluster according to the voxel characteristics of each voxel corresponding to the point cloud cluster includes:
acquiring a voxel standard feature set comprising a plurality of voxel standard features;
respectively calculating Euclidean distances between the voxel characteristics of the voxels and the standard characteristics of the voxels according to each voxel corresponding to the point cloud cluster;
taking the voxel characteristic of which the Euclidean distance is smaller than a second preset distance threshold value as a target voxel characteristic;
and determining the target number of the detection objects included in the point cloud cluster according to the number of the target voxel characteristics.
5. The object detection method of claim 4, wherein the set of voxel standard features comprises a plurality of feature subsets;
determining a target number of detection objects included in the point cloud cluster according to the number of the target voxel features includes:
respectively counting the number of target voxel characteristics corresponding to the voxel standard characteristics in each characteristic subset;
for each feature subset, taking the maximum value of the number of target voxel features corresponding to each voxel standard feature in the feature subset as the number of detection objects corresponding to the feature subset;
and taking the sum of the number of the detection objects corresponding to each characteristic subset as the target number of the detection objects included in the point cloud cluster.
6. The object detection method of claim 5, wherein each feature subset corresponds to one object type, the method further comprising:
and determining the object type corresponding to the detected object according to the characteristic subset to which the voxel standard characteristic corresponding to the target voxel characteristic belongs.
7. The object detection method of claim 4, wherein the process of obtaining the set of voxel standard features comprises:
acquiring a sample point cloud of a sample object;
constructing a directed bounding box of the sample point cloud;
dividing a space corresponding to the directed bounding box of the sample point cloud into a plurality of sample voxels with the preset specification;
respectively inputting laser points included in the sample voxels into the feature extraction model to obtain corresponding candidate voxel standard features;
and screening the voxel standard characteristics meeting a preset screening rule from the candidate voxel standard characteristics to obtain the voxel standard characteristic set.
8. The object detection method according to claim 7, wherein the clustering laser points in the point cloud to be processed with the first preset distance threshold as an upper limit value of a laser point interval to obtain at least one point cloud cluster comprises:
executing the following operations until point cloud clusters to which all laser points in the point cloud to be processed belong are determined:
constructing an initial point cloud cluster comprising a target laser point, wherein the target laser point is any laser point in the point cloud to be processed whose point cloud cluster has not yet been determined;
calculating Euclidean distances between the target laser point and the laser points outside the initial clustering cluster;
storing the laser points with the Euclidean distance smaller than a first preset distance threshold value to the initial point cloud cluster;
sequentially taking the laser points stored in the initial clustering cluster as target laser points;
and returning to the step of calculating the Euclidean distance between the target laser point and the laser point outside the initial cluster until the Euclidean distance between the laser point outside the initial cluster and any target laser point in the initial cluster is greater than or equal to the first preset distance threshold value, and obtaining a final point cloud cluster.
9. An object detecting device, comprising:
the first acquisition unit is used for acquiring point clouds to be processed;
the first clustering unit is used for clustering laser points in the point cloud to be processed by taking a first preset distance threshold value as an upper limit value of the laser point interval to obtain at least one point cloud cluster;
the point cloud cluster comprises at least one detection object, and the first preset distance threshold is set based on the maximum distance which possibly occurs between different parts of the same detection object;
the dividing unit is used for constructing a directed bounding box of the point cloud cluster and dividing a space corresponding to the directed bounding box into a plurality of voxels with preset specifications;
the quantity determining unit is used for determining the target quantity of detection objects included in the point cloud cluster according to the voxels corresponding to the point cloud cluster;
and the second clustering unit is used for clustering the voxels corresponding to the point cloud clusters until the target number of voxel clusters are obtained, so that the laser points in the same voxel cluster correspond to the same detection object.
10. The object detection apparatus according to claim 9, wherein the number determination unit, when determining the target number of detection objects included in the point cloud cluster according to the voxels corresponding to the point cloud cluster, specifically includes:
acquiring voxel characteristics of each voxel corresponding to the point cloud cluster, wherein the voxel characteristics are represented by characteristic vectors with preset dimensions;
and determining the target number of the detection objects included in the point cloud cluster according to the voxel characteristics of each voxel corresponding to the point cloud cluster.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110430291.8A CN113076922B (en) | 2021-04-21 | 2021-04-21 | Object detection method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113076922A true CN113076922A (en) | 2021-07-06 |
CN113076922B CN113076922B (en) | 2024-05-10 |
Family
ID=76618274
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110430291.8A Active CN113076922B (en) | 2021-04-21 | 2021-04-21 | Object detection method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113076922B (en) |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140368807A1 (en) * | 2013-06-14 | 2014-12-18 | Microsoft Corporation | Lidar-based classification of object movement |
CN105184852A (en) * | 2015-08-04 | 2015-12-23 | 百度在线网络技术(北京)有限公司 | Laser-point-cloud-based urban road identification method and apparatus |
US20180225515A1 (en) * | 2015-08-04 | 2018-08-09 | Baidu Online Network Technology (Beijing) Co. Ltd. | Method and apparatus for urban road recognition based on laser point cloud, storage medium, and device |
CN108717540A (en) * | 2018-08-03 | 2018-10-30 | 浙江梧斯源通信科技股份有限公司 | The method and device of pedestrian and vehicle are distinguished based on 2D laser radars |
WO2021046716A1 (en) * | 2019-09-10 | 2021-03-18 | 深圳市大疆创新科技有限公司 | Method, system and device for detecting target object and storage medium |
WO2021056499A1 (en) * | 2019-09-29 | 2021-04-01 | 深圳市大疆创新科技有限公司 | Data processing method and device, and movable platform |
CN111289998A (en) * | 2020-02-05 | 2020-06-16 | 北京汽车集团有限公司 | Obstacle detection method, obstacle detection device, storage medium, and vehicle |
CN112528781A (en) * | 2020-11-30 | 2021-03-19 | 广州文远知行科技有限公司 | Obstacle detection method, device, equipment and computer readable storage medium |
Non-Patent Citations (2)
Title |
---|
赵凯; 徐友春; 李永乐; 王任栋: "Denoising of large-scene scattered point clouds based on the VG-DBSCAN algorithm", Acta Optica Sinica, no. 10 *
陈逍遥; 任小玲; 夏邢; 史政坤: "A marker-based multi-state outlier removal algorithm", Foreign Electronic Measurement Technology, no. 01 *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113447928A (en) * | 2021-08-30 | 2021-09-28 | 广东电网有限责任公司湛江供电局 | False alarm rate reduction target identification method and system based on synthetic aperture radar |
CN113781639A (en) * | 2021-09-22 | 2021-12-10 | 交通运输部公路科学研究所 | Rapid construction method of large-scene road infrastructure digital model |
CN113781639B (en) * | 2021-09-22 | 2023-11-28 | 交通运输部公路科学研究所 | Quick construction method for digital model of large-scene road infrastructure |
CN115453545A (en) * | 2022-09-28 | 2022-12-09 | 北京京东乾石科技有限公司 | Target object detection method, apparatus, mobile device and storage medium |
CN116520289A (en) * | 2023-07-04 | 2023-08-01 | 东莞市新通电子设备有限公司 | Intelligent control method and related device for hardware machining process |
CN116520289B (en) * | 2023-07-04 | 2023-09-01 | 东莞市新通电子设备有限公司 | Intelligent control method and related device for hardware machining process |
Also Published As
Publication number | Publication date |
---|---|
CN113076922B (en) | 2024-05-10 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||