CN106845416B - Obstacle identification method and device, computer equipment and readable medium - Google Patents

Obstacle identification method and device, computer equipment and readable medium

Info

Publication number
CN106845416B
CN106845416B (application number CN201710051916.3A)
Authority
CN
China
Prior art keywords
obstacle
point cloud
identified
templates
template
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710051916.3A
Other languages
Chinese (zh)
Other versions
CN106845416A (en)
Inventor
谢国洋
郭疆
李晓晖
王亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baidu Online Network Technology Beijing Co Ltd
Original Assignee
Baidu Online Network Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Baidu Online Network Technology Beijing Co Ltd filed Critical Baidu Online Network Technology Beijing Co Ltd
Priority to CN201710051916.3A priority Critical patent/CN106845416B/en
Publication of CN106845416A publication Critical patent/CN106845416A/en
Application granted granted Critical
Publication of CN106845416B publication Critical patent/CN106845416B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides an obstacle identification method and device, computer equipment and a readable medium. The method comprises the following steps: respectively matching point clouds of obstacles to be identified with each obstacle point cloud template in a plurality of obstacle point cloud templates generated in advance to obtain a plurality of first matching scores; generating a feature vector of the obstacle to be identified according to the plurality of first matching scores; and identifying the category of the obstacle to be identified according to a pre-trained classifier model and the feature vector of the obstacle to be identified. According to the technical scheme, the point cloud of the obstacle to be recognized is respectively matched with each obstacle point cloud template in the plurality of obstacle point cloud templates generated in advance, so that the feature vector of the obstacle to be recognized contains richer information of the obstacle to be recognized, the recognition accuracy of the obstacle to be recognized can be effectively improved, and the recognition efficiency of the obstacle to be recognized can be effectively improved.

Description

Obstacle identification method and device, computer equipment and readable medium
[ technical field ]
The invention relates to the technical field of automatic driving, in particular to an obstacle identification method and device, computer equipment and a readable medium.
[ background of the invention ]
In existing automatic driving technology, the information output by recognition of obstacles to be identified is used as the input for control and planning, so accurate and fast recognition of obstacles to be identified is a very critical technology.
In the prior art, a camera or a laser radar is generally adopted to identify an obstacle to be identified. The camera scheme is applicable to scenes with sufficient illumination and a relatively stable environment; however, in bad weather or cluttered road environments the camera's view is unstable, so the acquired information about the obstacle to be identified is inaccurate. The laser radar is very expensive, but laser radar schemes are very stable and reliable in identifying obstacles to be identified. In the prior art, when a laser radar is used to identify an obstacle to be identified, the category of the obstacle is judged according to the size and the local features of the point cloud acquired by scanning the obstacle with the laser radar. For example, whether the obstacle to be identified is a person may be judged according to whether a local feature of its point cloud matches the shape of a person's head; whether the obstacle to be identified is a bicycle may be judged according to whether a local feature of its point cloud matches the front of a bicycle.
However, in the prior art, the local features of the point cloud of an obstacle to be identified scanned by the laser radar are usually not obvious, so the identification accuracy of the obstacle to be identified is poor and the identification efficiency is low.
[ summary of the invention ]
The invention provides an obstacle identification method and device, computer equipment and a readable medium, which are used for improving the identification accuracy and identification efficiency of an obstacle to be identified in automatic driving.
The invention provides an obstacle identification method, which comprises the following steps:
respectively matching point clouds of obstacles to be identified with each obstacle point cloud template in a plurality of obstacle point cloud templates generated in advance to obtain a plurality of first matching scores;
generating a feature vector of the obstacle to be identified according to the plurality of first matching scores;
and identifying the category of the obstacle to be identified according to a pre-trained classifier model and the feature vector of the obstacle to be identified.
Further optionally, in the method as described above, before the point cloud of the obstacle to be identified is matched with each of the obstacle point cloud templates in the plurality of obstacle point cloud templates generated in advance, respectively, to obtain a plurality of first matching scores, the method further includes:
and generating the plurality of obstacle point cloud templates according to the top N numerical values with the highest use frequency in each direction in the obstacle information base of each category counted in advance.
Further optionally, in the method as described above, the generating the plurality of obstacle point cloud templates according to the top N numbers with the highest frequency of use in each direction in the obstacle information base of each category counted in advance specifically includes:
respectively acquiring the first N length numerical values of the point cloud of the obstacle with the highest use frequency in the length direction, the first N width numerical values of the point cloud of the obstacle with the highest use frequency in the width direction and the first N height numerical values of the point cloud of the obstacle with the highest use frequency in the height direction from the obstacle information base of each category;
combining the first N length values, the first N width values and the first N height values in the obstacle information base corresponding to each category to generate the plurality of obstacle point cloud templates.
Further optionally, in the method, matching the point cloud of the obstacle to be identified with each of a plurality of obstacle point cloud templates generated in advance to obtain a plurality of first matching scores includes:
respectively projecting the point cloud of the obstacle to be identified and each obstacle point cloud template in the plurality of obstacle point cloud templates on a length plane and a width plane;
carrying out mesh division on the projection of the point cloud of the obstacle to be identified and the projection of each obstacle point cloud template in the plurality of obstacle point cloud templates according to the same scale;
matching each grid in the projection of the point cloud of the obstacle to be identified with a corresponding grid in the projection of each obstacle point cloud template in the plurality of obstacle point cloud templates to obtain a grid matching result between the point cloud of the obstacle to be identified and each obstacle point cloud template in the plurality of obstacle point cloud templates;
and obtaining a plurality of first matching scores according to a grid matching result between the point cloud of the obstacle to be identified and each obstacle point cloud template in the plurality of obstacle point cloud templates.
Further optionally, in the method as described above, before generating the plurality of obstacle point cloud templates according to the top N numerical values which are counted in advance and have the highest frequency of use in each direction in each category of obstacle information base, the method further includes:
and carrying out classified statistics on the point cloud information of each obstacle in the obstacle training set according to the type of the obstacle to obtain the obstacle information base corresponding to each type.
Further optionally, in the method as described above, before the identifying the category of the obstacle to be identified according to a pre-trained classifier model and the feature vector of the obstacle to be identified, the method further includes:
acquiring point cloud information of a plurality of preset obstacles with marked obstacle categories to generate an obstacle training set;
and training the classifier model according to the point cloud information of the preset obstacles in the obstacle training set.
Further optionally, in the method as described above, training the classifier model according to the point cloud information of the plurality of preset obstacles in the obstacle training set specifically includes:
respectively projecting the point cloud of each preset obstacle and each obstacle point cloud template in the plurality of obstacle point cloud templates on a length plane and a width plane;
carrying out mesh division on the projection of the point cloud of each preset obstacle and the projection of each obstacle point cloud template in the plurality of obstacle point cloud templates according to the same scale;
matching each grid in the projection of the point cloud of each preset obstacle with a corresponding grid in the projection of each obstacle point cloud template in the plurality of obstacle point cloud templates to obtain a grid matching result between the point cloud of each preset obstacle and each obstacle point cloud template in the plurality of obstacle point cloud templates;
and obtaining a plurality of second matching scores according to the grid matching result between the point cloud of each preset obstacle and each obstacle point cloud template in the plurality of obstacle point cloud templates.
Obtaining a feature vector of the point cloud of the corresponding preset obstacle according to the plurality of second matching scores corresponding to the point cloud of the preset obstacle;
and training a classifier model according to the feature vector of the point cloud of each preset obstacle and the corresponding category of the point cloud of the preset obstacle, so as to determine the classifier model.
The present invention also provides an obstacle recognition apparatus, the apparatus including:
the matching module is used for respectively matching the point cloud of the obstacle to be identified with each obstacle point cloud template in a plurality of obstacle point cloud templates generated in advance to obtain a plurality of first matching scores;
the feature vector generation module is used for generating feature vectors of the obstacles to be identified according to the plurality of first matching scores;
and the identification module is used for identifying the category of the obstacle to be identified according to a pre-trained classifier model and the feature vector of the obstacle to be identified.
Further optionally, in the apparatus as described above, further comprising:
and the template generating module is used for generating the plurality of obstacle point cloud templates according to the first N numerical values which are counted in advance and have the highest use frequency in each direction in the obstacle information base of each category.
Further optionally, in the apparatus as described above, the template generating module is specifically configured to:
respectively acquiring the first N length numerical values of the point cloud of the obstacle with the highest use frequency in the length direction, the first N width numerical values of the point cloud of the obstacle with the highest use frequency in the width direction and the first N height numerical values of the point cloud of the obstacle with the highest use frequency in the height direction from the obstacle information base of each category;
combining the first N length values, the first N width values and the first N height values in the obstacle information base corresponding to each category to generate the plurality of obstacle point cloud templates.
Further optionally, in the apparatus as described above, the matching module is specifically configured to:
respectively projecting the point cloud of the obstacle to be identified and each obstacle point cloud template in the plurality of obstacle point cloud templates on a length plane and a width plane;
carrying out mesh division on the projection of the point cloud of the obstacle to be identified and the projection of each obstacle point cloud template in the plurality of obstacle point cloud templates according to the same scale;
matching each grid in the projection of the point cloud of the obstacle to be identified with a corresponding grid in the projection of each obstacle point cloud template in the plurality of obstacle point cloud templates to obtain a grid matching result between the point cloud of the obstacle to be identified and each obstacle point cloud template in the plurality of obstacle point cloud templates;
and obtaining a plurality of first matching scores according to a grid matching result between the point cloud of the obstacle to be identified and each obstacle point cloud template in the plurality of obstacle point cloud templates.
Further optionally, in the apparatus as described above, further comprising:
and the classification module is used for performing classification statistics on the point cloud information of each obstacle in the obstacle training set according to the type of the obstacle to obtain the obstacle information base corresponding to each type.
Further optionally, in the apparatus as described above, further comprising:
the acquisition module is used for acquiring point cloud information of a plurality of preset obstacles marked with obstacle categories to generate an obstacle training set;
and the training module is used for training the classifier model according to the point cloud information of the preset obstacles in the obstacle training set.
Further optionally, in the apparatus as described above, the training module is specifically configured to:
respectively projecting the point cloud of each preset obstacle and each obstacle point cloud template in the plurality of obstacle point cloud templates on a length plane and a width plane;
carrying out mesh division on the projection of the point cloud of each preset obstacle and the projection of each obstacle point cloud template in the plurality of obstacle point cloud templates according to the same scale;
matching each grid in the projection of the point cloud of each preset obstacle with a corresponding grid in the projection of each obstacle point cloud template in the plurality of obstacle point cloud templates to obtain a grid matching result between the point cloud of each preset obstacle and each obstacle point cloud template in the plurality of obstacle point cloud templates;
and obtaining a plurality of second matching scores according to the grid matching result between the point cloud of each preset obstacle and each obstacle point cloud template in the plurality of obstacle point cloud templates.
Obtaining a feature vector of the point cloud of the corresponding preset obstacle according to the plurality of second matching scores corresponding to the point cloud of the preset obstacle;
and training a classifier model according to the feature vector of the point cloud of each preset obstacle and the corresponding category of the point cloud of the preset obstacle, so as to determine the classifier model.
The invention also provides a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the obstacle identification method as described above when executing the program.
The invention also provides a computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the obstacle identification method as described above.
According to the obstacle identification method and device, the computer equipment and the readable medium, a plurality of first matching scores are obtained by respectively matching the point cloud of the obstacle to be identified with each obstacle point cloud template in a plurality of obstacle point cloud templates generated in advance; generating a feature vector of the obstacle to be identified according to the plurality of first matching scores; and identifying the category of the obstacle to be identified according to the pre-trained classifier model and the feature vector of the obstacle to be identified. Compared with the identification method for identifying the obstacle to be identified according to the size and the local characteristics of the point cloud of the obstacle to be identified in the prior art, the technical scheme of the invention matches the point cloud of the obstacle to be identified with each obstacle point cloud template in a plurality of obstacle point cloud templates generated in advance respectively, so that the feature vector of the obstacle to be identified contains more abundant information of the obstacle to be identified, identifies the feature vector of the obstacle to be identified according to a classifier model trained in advance to determine the category of the obstacle to be identified, can effectively improve the identification accuracy of the obstacle to be identified, and can effectively improve the identification efficiency of the obstacle to be identified.
[ description of the drawings ]
Fig. 1 is a flowchart of an obstacle identification method according to an embodiment of the present invention.
Fig. 2 is a structural diagram of a first obstacle recognition device according to an embodiment of the present invention.
Fig. 3 is a structural diagram of a second obstacle recognition device according to an embodiment of the present invention.
Fig. 4 is a block diagram of a computer device provided by the present invention.
[ detailed description of the embodiments ]
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in detail with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a flowchart of an obstacle identification method according to an embodiment of the present invention. As shown in fig. 1, the obstacle identification method of this embodiment may specifically include the following steps:
100. respectively matching the point cloud of the obstacle to be identified with each obstacle point cloud template in a plurality of obstacle point cloud templates generated in advance to obtain a plurality of first matching scores;
the obstacle recognition method of the embodiment is applied to the technical field of automatic driving. In automatic driving, a vehicle is required to be capable of automatically identifying obstacles in a road so as to make a decision and control in time during vehicle driving, and the vehicle can safely drive conveniently. The execution subject of the obstacle recognition method of the embodiment may be an obstacle recognition device, which may be integrated by a plurality of modules, and the obstacle recognition device may be specifically provided in an autonomous vehicle to control the autonomous vehicle.
The point cloud of the obstacle to be identified in this embodiment may be obtained by scanning with a laser radar. The specification of the laser radar may be 16-line, 32-line, 64-line, etc.; a higher number of lines indicates a denser point cloud acquired by the laser radar. The obstacle point cloud templates of this embodiment may be obstacle point clouds preset according to experience; since obstacle point clouds are adopted as templates, the category of each template point cloud is also determined. In this embodiment, the point cloud of the obstacle to be identified is first matched with each pre-generated obstacle point cloud template; from the result of each matching, a first matching score is obtained, so that for a plurality of obstacle point cloud templates, a plurality of first matching scores can be obtained.
101. Generating a feature vector of the obstacle to be identified according to the plurality of first matching scores;
specifically, the plurality of first matching scores are arranged in a line to form a one-dimensional vector, and a feature vector of the obstacle to be identified is obtained.
102. And identifying the category of the obstacle to be identified according to the pre-trained classifier model and the feature vector of the obstacle to be identified.
In this embodiment, a classifier model is trained in advance, the input of the classifier model may be a feature vector of the obstacle, and the output may be a category of the obstacle. In this way, the feature vector of the obstacle to be recognized acquired in the above embodiment is input to the classifier model, and the class output by the classifier model is the class of the obstacle to be recognized.
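As a purely illustrative sketch of steps 100-102, and not part of the claimed method, the following Python code shows one way the first matching scores could be stacked into a feature vector and passed to a trained classifier. The helper match_score, the template list and the classifier object are hypothetical placeholders; the classifier is only assumed to expose a scikit-learn-style predict interface.

```python
import numpy as np

def build_feature_vector(obstacle_points, templates, match_score):
    # Steps 100/101: match the obstacle point cloud against every template and
    # arrange the resulting first matching scores into a one-dimensional vector.
    scores = [match_score(obstacle_points, template) for template in templates]
    return np.asarray(scores, dtype=np.float32)

def identify_obstacle(obstacle_points, templates, match_score, classifier):
    # Step 102: the pre-trained classifier maps the feature vector to a category.
    feature = build_feature_vector(obstacle_points, templates, match_score)
    return classifier.predict(feature.reshape(1, -1))[0]
```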
In the obstacle identification method of the embodiment, a plurality of first matching scores are obtained by respectively matching point clouds of obstacles to be identified with each obstacle point cloud template in a plurality of obstacle point cloud templates generated in advance; generating a feature vector of the obstacle to be identified according to the plurality of first matching scores; and identifying the category of the obstacle to be identified according to the pre-trained classifier model and the feature vector of the obstacle to be identified. Compared with the identification method for identifying the obstacle to be identified according to the size and the local characteristics of the point cloud of the obstacle to be identified in the prior art, according to the technical scheme of the embodiment, the point cloud of the obstacle to be identified is respectively matched with each obstacle point cloud template in a plurality of obstacle point cloud templates generated in advance, so that the feature vector of the obstacle to be identified contains more abundant information of the obstacle to be identified, the feature vector of the obstacle to be identified is identified according to a classifier model trained in advance, the category of the obstacle to be identified is determined, the identification accuracy of the obstacle to be identified can be effectively improved, and the identification efficiency of the obstacle to be identified can be effectively improved.
Further optionally, on the basis of the technical solution of the embodiment shown in fig. 1, before step 100 "match the point cloud of the obstacle to be identified with each obstacle point cloud template in the plurality of obstacle point cloud templates generated in advance, respectively, to obtain a plurality of first matching scores", the following steps may be further included: and generating a plurality of obstacle point cloud templates according to the top N numerical values with the highest use frequency in each direction in the obstacle information base of each category counted in advance.
In this embodiment, the categories of obstacles to be identified may be divided into pedestrian, bicycle, car, bus and other categories. When an obstacle is identified but its category cannot be determined, it is identified as the other category. In practical applications, obstacle categories can be added step by step as new vehicle types appear on the road, and the classifier model can be updated and retrained with the point clouds of numerous obstacles of the new categories, so that the updated classifier model can also identify obstacles of the newly added categories.
Specifically, various types of obstacle information such as point cloud information may be counted in advance. For example, the collected point cloud information of all obstacles belonging to pedestrians is classified into one category as the obstacle information base of pedestrians; the collected point cloud information of all obstacles belonging to bicycles is classified into one category as the obstacle information base of bicycles; the collected point cloud information of all obstacles belonging to cars is classified into one category as the obstacle information base of cars; the collected point cloud information of all obstacles belonging to buses is classified into one category as the obstacle information base of buses; and the collected point cloud information of all obstacles not belonging to any of the above categories is classified into one category as the obstacle information base of the other category. The obstacle information base stores the length, width and height information of the point cloud of each obstacle, and may further include the total number of points contained in each obstacle point cloud, the specification of the laser radar used, and so on. When the laser radar scans an obstacle to obtain its point cloud, the coordinates of each point in the point cloud can be determined; the origin of the coordinate system used is the centroid position of the vehicle currently carrying the laser radar. From the coordinates of all points in the point cloud of the obstacle, the maximum value ymax and minimum value ymin in the length direction, the maximum value xmax and minimum value xmin in the width direction, and the maximum value zmax and minimum value zmin in the height direction can be determined; the length of the obstacle can then be taken as ymax - ymin, the width of the obstacle as xmax - xmin, and the height of the obstacle as zmax - zmin.
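As an illustration only, a minimal Python sketch of the dimension computation above, assuming the point cloud is an (M, 3) array of (x, y, z) coordinates in the vehicle-centred frame described here; the helper name is not taken from the patent.

```python
import numpy as np

def obstacle_dimensions(points):
    # points: (M, 3) array of (x, y, z); x = width direction,
    # y = length direction, z = height direction.
    xmin, ymin, zmin = points.min(axis=0)
    xmax, ymax, zmax = points.max(axis=0)
    length = ymax - ymin   # length direction
    width = xmax - xmin    # width direction
    height = zmax - zmin   # height direction
    return length, width, height
```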
Then, in each category of obstacle information base, the first N numerical values with the highest use frequency in each direction are taken. For example, the obstacle information base corresponding to pedestrians may include point cloud information of pedestrians of various sizes; the first N numerical values with the highest use frequency in each direction can be acquired from this information base to generate a plurality of obstacle point cloud templates corresponding to pedestrians. In this embodiment, the point clouds of all obstacles are stored according to length, width and height, so the first N values with the highest use frequency in the length, width and height directions are respectively obtained from the obstacle information base corresponding to pedestrians. Similarly, the first N numerical values with the highest use frequency in each direction can be obtained from the obstacle information bases corresponding to bicycles, cars or other categories to obtain a plurality of obstacle point cloud templates for each of these categories, and finally the obstacle point cloud templates corresponding to all categories are collected together as the plurality of obstacle point cloud templates generated in advance.
Further optionally, the generating a plurality of obstacle point cloud templates according to the top N numerical values with the highest frequency of use in each direction in the obstacle information base of each category counted in advance may specifically include the following steps:
(a1) respectively acquiring the first N length values of the point cloud of the obstacle with the highest use frequency in the length direction, the first N width values of the point cloud of the obstacle with the highest use frequency in the width direction and the first N height values of the point cloud of the obstacle with the highest use frequency in the height direction from the obstacle information base of each category;
(a2) and combining the first N length values, the first N width values and the first N height values in the obstacle information base corresponding to each category to generate a plurality of obstacle point cloud templates.
In this embodiment, taking 5 categories of obstacles as an example, the first N length values, the first N width values and the first N height values with the highest use frequency are obtained from the obstacle information base of each category. To expand the coverage of the templates, the first N length values, the first N width values and the first N height values may be combined, so that each category yields N × N × N obstacle point cloud templates, and the 5 categories yield 5 × N × N × N obstacle point cloud templates in total. For example, when N is 3, in the obstacle information base of each category, the 3 length values, 3 width values and 3 height values with the highest use frequency may be selected and combined to obtain 27 obstacle point cloud templates corresponding to that category; with 5 categories of obstacles, 27 × 5 obstacle point cloud templates can be obtained. In practical applications, the value of N can be selected according to experience: the larger N is, the more obstacle point cloud templates are obtained and the more accurately the obstacle to be identified can be recognized, but the larger the amount of calculation in the identification process.
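For illustration, a sketch of steps (a1)-(a2) under the assumption that each category's obstacle information base is kept as a list of (length, width, height) tuples: the N most frequent values in each direction are taken and their Cartesian product yields N × N × N template sizes per category.

```python
from collections import Counter
from itertools import product

def generate_templates(info_bases, n=3):
    # info_bases: {category: list of (length, width, height) tuples} -- assumed layout.
    # Returns {category: list of (length, width, height) template sizes}, N*N*N per category.
    templates = {}
    for category, dims in info_bases.items():
        top_lengths = [value for value, _ in Counter(d[0] for d in dims).most_common(n)]
        top_widths = [value for value, _ in Counter(d[1] for d in dims).most_common(n)]
        top_heights = [value for value, _ in Counter(d[2] for d in dims).most_common(n)]
        # Cartesian product of the top-N values in each direction.
        templates[category] = list(product(top_lengths, top_widths, top_heights))
    return templates
```

With five categories and n = 3 this sketch yields 27 × 5 = 135 template sizes, matching the count given above.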
Further, the step 100 "respectively matching the point cloud of the obstacle to be identified with each obstacle point cloud template in a plurality of obstacle point cloud templates generated in advance to obtain a plurality of first matching scores" may specifically include the following steps:
(b1) respectively projecting the point cloud of the obstacle to be identified and each obstacle point cloud template in the plurality of obstacle point cloud templates on a length plane and a width plane;
(b2) carrying out grid division on the projection of the point cloud of the obstacle to be identified and the projection of each obstacle point cloud template in the plurality of obstacle point cloud templates according to the same scale;
(b3) matching each grid in the projection of the point cloud of the obstacle to be identified with a corresponding grid in the projection of each obstacle point cloud template in the plurality of obstacle point cloud templates to obtain a grid matching result between the point cloud of the obstacle to be identified and each obstacle point cloud template in the plurality of obstacle point cloud templates;
(b4) and obtaining a plurality of first matching scores according to the grid matching result between the point cloud of the obstacle to be identified and each obstacle point cloud template in the plurality of obstacle point cloud templates.
In this embodiment the length direction is the y direction, the width direction is the x direction and the height direction is the z direction. When identifying an obstacle to be identified, the point cloud of the obstacle to be identified and each obstacle point cloud template in the plurality of obstacle point cloud templates obtained in the above embodiment are respectively projected onto the xy plane; then the projection of the point cloud of the obstacle to be identified on the xy plane and the projection of each obstacle point cloud template on the xy plane are divided into grids according to the same scale. In this way, the projection of the point cloud of the obstacle to be identified and the projection of each obstacle point cloud template contain the same number of grids. Each grid in the projection of the point cloud of the obstacle to be identified is then matched with the corresponding grid in the projection of each obstacle point cloud template: if the two grids are in the same state, the matching result of that grid is scored as 1; if the states are different, it is scored as 0. For example, if a projection point exists in a certain grid of the projection of the point cloud of the obstacle to be identified and a projection point also exists in the corresponding grid of the projection of a certain obstacle point cloud template, the two states are the same and the matching result of that grid is scored as 1; otherwise, if no projection point exists in the corresponding grid of the projection of that obstacle point cloud template, the two states are different and the matching result of that grid is scored as 0. After matching all grids in the projection of the point cloud of the obstacle to be identified with all corresponding grids in the projection of a certain obstacle point cloud template, the scores of all grids are added to obtain the first matching score between the projection of the point cloud of the obstacle to be identified and the projection of that obstacle point cloud template. In the same way, the projection of the point cloud of the obstacle to be identified can be matched with the projection of each obstacle point cloud template, and a plurality of first matching scores can be obtained.
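The grid matching of steps (b1)-(b4) could be realised as in the following sketch: both point clouds are projected onto the xy plane, rasterised into boolean occupancy grids at the same scale, and the first matching score is the number of grids whose occupancy states agree. The grid resolution and the shared projection range are assumptions of the example, not values fixed by the method.

```python
import numpy as np

def occupancy_grid(points, x_range, y_range, cell=0.1):
    # Project the point cloud onto the xy plane and rasterise it into a boolean
    # grid: a cell is True if at least one projected point falls inside it.
    nx = int(np.ceil((x_range[1] - x_range[0]) / cell))
    ny = int(np.ceil((y_range[1] - y_range[0]) / cell))
    grid = np.zeros((nx, ny), dtype=bool)
    ix = np.clip(((points[:, 0] - x_range[0]) / cell).astype(int), 0, nx - 1)
    iy = np.clip(((points[:, 1] - y_range[0]) / cell).astype(int), 0, ny - 1)
    grid[ix, iy] = True
    return grid

def first_matching_score(obstacle_points, template_points, x_range, y_range, cell=0.1):
    # Grids in the same state (both occupied or both empty) score 1, others 0;
    # the first matching score is the sum over all grids.
    obstacle_grid = occupancy_grid(obstacle_points, x_range, y_range, cell)
    template_grid = occupancy_grid(template_points, x_range, y_range, cell)
    return int(np.sum(obstacle_grid == template_grid))
```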
Further optionally, before the step "generating a plurality of obstacle point cloud templates according to the top N numerical values with the highest frequency of use in each direction in the obstacle information base of each category counted in advance", the method may further include the following steps: and carrying out classified statistics on the point cloud information of each obstacle in the obstacle training set according to the type of the obstacle to obtain an obstacle information base corresponding to each type.
In the point cloud information of the obstacles collected in the obstacle training set, the type of each obstacle is determined. The point cloud information of each obstacle in the obstacle training set can be classified according to the type of the obstacle, and the point cloud information of the obstacles in each type forms an obstacle information base corresponding to the type.
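A possible sketch of this classification statistics step, assuming the obstacle training set is a list of (point cloud, category) pairs and reusing the hypothetical obstacle_dimensions helper from the earlier sketch:

```python
def build_information_bases(training_set):
    # training_set: list of (points, category) pairs -- assumed layout.
    # Returns {category: list of (length, width, height) tuples}, i.e. one
    # obstacle information base per category.
    info_bases = {}
    for points, category in training_set:
        dims = obstacle_dimensions(points)  # (length, width, height)
        info_bases.setdefault(category, []).append(dims)
    return info_bases
```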
Further optionally, on the basis of the technical solution of the embodiment shown in fig. 1, before the step 102 "identifying the category of the obstacle to be identified according to the pre-trained classifier model and the feature vector of the obstacle to be identified", the following steps may be further included:
(c1) acquiring point cloud information of a plurality of preset obstacles with marked obstacle categories to generate an obstacle training set;
(c2) and training a classifier model according to the point cloud information of a plurality of preset obstacles in the obstacle training set.
In this embodiment, the obstacle training set may contain a large amount of preset obstacle point cloud information, for example more than 5000, or even more than ten thousand, entries. The more preset obstacle point cloud information the training set contains, the more accurate the parameters of the classifier model determined during training, and the more accurate the subsequent classification of obstacles to be identified by the classifier model.
Further optionally, the step (c2) "training the classifier model according to the point cloud information of the plurality of preset obstacles in the obstacle training set", specifically may include the steps of:
(d1) respectively projecting the point cloud of each preset obstacle and each obstacle point cloud template in the plurality of obstacle point cloud templates on a length plane and a width plane;
(d2) carrying out grid division on the projection of the point cloud of each preset obstacle and the projection of each obstacle point cloud template in a plurality of obstacle point cloud templates according to the same scale;
(d3) matching each grid in the projection of the point cloud of each preset obstacle with a corresponding grid in the projection of each obstacle point cloud template in the plurality of obstacle point cloud templates to obtain a grid matching result between the point cloud of each preset obstacle and each obstacle point cloud template in the plurality of obstacle point cloud templates;
(d4) and obtaining a plurality of second matching scores according to the grid matching result between the point cloud of each preset obstacle and each obstacle point cloud template in the plurality of obstacle point cloud templates.
(d5) Obtaining a feature vector of the point cloud of the corresponding preset obstacle according to a plurality of second matching scores corresponding to the point cloud of each preset obstacle;
(d6) and training a classifier model according to the feature vector of the point cloud of each preset obstacle and the corresponding category of the point cloud of the preset obstacle, thereby determining the classifier model.
In this embodiment, the implementation of the above steps (d1)-(d5) can refer to the implementation of the above steps (b1)-(b4) and step 101. That is, the feature vector of each preset obstacle is generated in the same way as the feature vector of the obstacle to be identified is generated when the classifier model is used to identify the category of the obstacle to be identified. In particular, the scale used for dividing the projection of the point cloud of the obstacle to be identified and the projections of the obstacle point cloud templates into grids during identification must be the same as the scale used for dividing the projection of the point cloud of each preset obstacle and the projections of the obstacle point cloud templates into grids during training of the classifier model. Moreover, the specification of the laser radar used to obtain the point cloud of the obstacle to be identified during identification should be the same as the specification of the laser radar used to obtain the point cloud of each preset obstacle during training; otherwise, the numbers of points contained in the point clouds will not be on the same order of magnitude, and the obstacle to be identified cannot be recognized accurately.
Finally, the classifier model is trained according to the feature vectors of the preset obstacles and the categories of the preset obstacles, so that when the feature vector of a preset obstacle is input, the classifier model can output the category of that preset obstacle. During training, because the category of each preset obstacle is known, if the category output by the classifier model for the feature vector of a preset obstacle does not match the category known in advance, the parameters of the classifier model can be adjusted so that the category output by the classifier model matches the category known in advance. By training the classifier model with the point cloud information of the numerous preset obstacles in the obstacle training set through the above steps (d1)-(d6), the parameters of the classifier model can be determined, thereby determining the classifier model. At this point, if the feature vector of an obstacle to be identified is input into the determined classifier model, the classifier model can output the category of the obstacle to be identified.
The classifier model of this embodiment may be any one of a random forest model, a decision tree model, a logistic regression model, a Support Vector Machine (SVM) model, and a neural network model, which is not limited herein.
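Assuming the support vector machine variant mentioned above is chosen, training steps (d1)-(d6) might be sketched as follows; scikit-learn, the match_score helper and the (point cloud, category) training-set layout are illustrative assumptions, not part of the claimed method.

```python
import numpy as np
from sklearn.svm import SVC

def train_classifier(training_set, templates, match_score):
    # Steps (d1)-(d5): turn each preset obstacle into a feature vector of
    # second matching scores against all obstacle point cloud templates.
    features = np.array(
        [[match_score(points, template) for template in templates]
         for points, _ in training_set],
        dtype=np.float32,
    )
    labels = [category for _, category in training_set]
    # Step (d6): fit the classifier on (feature vector, category) pairs.
    model = SVC(kernel="rbf")
    model.fit(features, labels)
    return model
```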
By adopting the obstacle identification method of the embodiment, after the automatic driving vehicle scans the point cloud of the obstacle to be identified through the laser radar, the obstacle to be identified can be identified according to the obstacle identification method, and the driving of the vehicle can be further controlled according to the type of the obstacle, for example, the vehicle is controlled to avoid the obstacle, so that the driving safety of the automatic driving vehicle is effectively improved.
By adopting the technical scheme of the embodiment, not only can an accurate classifier model be trained, but also the feature vector of the obstacle to be recognized can contain richer information of the obstacle to be recognized, and the feature vector of the obstacle to be recognized is recognized according to the pre-trained classifier model so as to determine the category of the obstacle to be recognized. Therefore, the technical scheme of the embodiment can effectively improve the identification accuracy of the obstacle to be identified, so that the identification efficiency of the obstacle to be identified can be effectively improved.
Fig. 2 is a structural diagram of a first obstacle recognition device according to an embodiment of the present invention. As shown in fig. 2, the obstacle identification device of the present embodiment may specifically include: a matching module 10, a feature vector generation module 11 and a recognition module 12.
The matching module 10 is used for respectively matching the point cloud of the obstacle to be identified with each obstacle point cloud template in a plurality of obstacle point cloud templates generated in advance to obtain a plurality of first matching scores; the feature vector generation module 11 is configured to generate a feature vector of the obstacle to be identified according to the plurality of first matching scores obtained by matching in the matching module 10; the recognition module 12 is configured to recognize a category of the obstacle to be recognized according to the pre-trained classifier model and the feature vector of the obstacle to be recognized generated by the feature vector generation module 11.
The obstacle identification device of this embodiment identifies the obstacle to be identified by using the module, and the implementation principle and the technical effect of the related method embodiment are the same, so that reference may be made to the description of the related method embodiment in detail, and details are not repeated here.
Fig. 3 is a structural diagram of a second obstacle recognition device according to an embodiment of the present invention. As shown in fig. 3, the obstacle recognition device of the present embodiment further describes the technical solution of the present invention in more detail on the basis of the technical solution of the embodiment shown in fig. 2.
As shown in fig. 3, the obstacle recognition device of the present embodiment further includes: a template generation module 13. The template generating module 13 is configured to generate a plurality of obstacle point cloud templates according to the top N numerical values that are counted in advance and have the highest frequency of use in each direction in the obstacle information base of each category.
Correspondingly, the matching module 10 is configured to match the point cloud of the obstacle to be identified with each obstacle point cloud template in the plurality of obstacle point cloud templates generated in advance by the template generation module 13, respectively, to obtain a plurality of first matching scores;
further optionally, in the obstacle identification apparatus of this embodiment, the template generating module 13 is specifically configured to:
respectively acquiring the first N length values of the point cloud of the obstacle with the highest use frequency in the length direction, the first N width values of the point cloud of the obstacle with the highest use frequency in the width direction and the first N height values of the point cloud of the obstacle with the highest use frequency in the height direction from the obstacle information base of each category;
and combining the first N length values, the first N width values and the first N height values in the obstacle information base corresponding to each category to generate a plurality of obstacle point cloud templates.
Further optionally, in the obstacle identification apparatus of this embodiment, the matching module 10 is specifically configured to:
respectively projecting the point cloud of the obstacle to be identified and each obstacle point cloud template in a plurality of obstacle point cloud templates generated in advance by the template generation module 13 on a length plane and a width plane;
carrying out grid division on the projection of the point cloud of the obstacle to be identified and the projection of each obstacle point cloud template in the plurality of obstacle point cloud templates according to the same scale;
matching each grid in the projection of the point cloud of the obstacle to be identified with a corresponding grid in the projection of each obstacle point cloud template in the plurality of obstacle point cloud templates to obtain a grid matching result between the point cloud of the obstacle to be identified and each obstacle point cloud template in the plurality of obstacle point cloud templates;
and obtaining a plurality of first matching scores according to the grid matching result between the point cloud of the obstacle to be identified and each obstacle point cloud template in the plurality of obstacle point cloud templates.
Further optionally, as shown in fig. 3, the obstacle identification device of this embodiment further includes: a classification module 14. The classification module 14 is configured to perform classification statistics on the point cloud information of each obstacle in the obstacle training set according to the category of the obstacle, so as to obtain an obstacle information base corresponding to each category.
Correspondingly, the template generating module 13 is configured to generate a plurality of obstacle point cloud templates according to the first N numerical values with the highest use frequency in each direction in the obstacle information base of each category counted in advance by the classifying module 14.
Further optionally, as shown in fig. 3, the obstacle identification device of this embodiment further includes: an acquisition module 15 and a training module 16.
The acquisition module 15 is configured to acquire point cloud information of a plurality of preset obstacles with marked obstacle categories, and generate an obstacle training set;
the training module 16 is configured to train a classifier model according to the point cloud information of a plurality of preset obstacles in the obstacle training set.
Correspondingly, the classification module 14 is configured to perform classification statistics on the point cloud information of each obstacle in the obstacle training set acquired by the acquisition module 15 according to the category of the obstacle, so as to obtain an obstacle information base corresponding to each category.
Further optionally, in the obstacle recognition device of this embodiment, the training module 16 is specifically configured to:
respectively projecting the point cloud of each preset obstacle and each obstacle point cloud template in the plurality of obstacle point cloud templates generated by the template generation module 13 on a length plane and a width plane;
carrying out grid division on the projection of the point cloud of each preset obstacle and the projection of each obstacle point cloud template in a plurality of obstacle point cloud templates according to the same scale;
matching each grid in the projection of the point cloud of each preset obstacle with a corresponding grid in the projection of each obstacle point cloud template in the plurality of obstacle point cloud templates to obtain a grid matching result between the point cloud of each preset obstacle and each obstacle point cloud template in the plurality of obstacle point cloud templates;
and obtaining a plurality of second matching scores according to the grid matching result between the point cloud of each preset obstacle and each obstacle point cloud template in the plurality of obstacle point cloud templates.
Obtaining a feature vector of the point cloud of the corresponding preset obstacle according to a plurality of second matching scores corresponding to the point cloud of each preset obstacle;
and training a classifier model according to the feature vector of the point cloud of each preset obstacle and the corresponding category of the point cloud of the preset obstacle, thereby determining the classifier model.
Correspondingly, the recognition module 12 is configured to recognize the category of the obstacle to be recognized according to the classifier model trained in advance by the training module 16 and the feature vector of the obstacle to be recognized generated by the feature vector generation module 11.
The obstacle identification device of this embodiment identifies the obstacle to be identified by using the module, and the implementation principle and the technical effect of the related method embodiment are the same, so that reference may be made to the description of the related method embodiment in detail, and details are not repeated here.
The invention also provides a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor executes the computer program to implement the obstacle identification method as shown in the above embodiments.
For example, fig. 4 is a block diagram of a computer device provided in the present invention. FIG. 4 illustrates a block diagram of an exemplary computer device 12a suitable for use in implementing embodiments of the present invention. The computer device 12a shown in FIG. 4 is only an example and should not bring any limitations to the functionality or scope of use of embodiments of the present invention.
As shown in FIG. 4, computer device 12a is in the form of a general purpose computing device. The components of computer device 12a may include, but are not limited to: one or more processors 16a, a system memory 28a, and a bus 18a that connects the various system components (including the system memory 28a and the processor 16a).
Bus 18a represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Computer device 12a typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer device 12a and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28a may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM)30a and/or cache memory 32 a. Computer device 12a may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34a may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 4, and commonly referred to as a "hard drive"). Although not shown in FIG. 4, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18a by one or more data media interfaces. Memory 28a may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of the various embodiments of the invention described above in fig. 1-3.
A program/utility 40a having a set (at least one) of program modules 42a may be stored, for example, in memory 28a, such program modules 42a including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may include an implementation of a network environment. Program modules 42a generally perform the functions and/or methodologies described above in connection with the various embodiments of fig. 1-3 of the present invention.
Computer device 12a may also communicate with one or more external devices 14a (e.g., keyboard, pointing device, display 24a, etc.), with one or more devices that enable a user to interact with computer device 12a, and/or with any devices (e.g., network card, modem, etc.) that enable computer device 12a to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22 a. Also, computer device 12a may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet) through network adapter 20 a. As shown, network adapter 20a communicates with the other modules of computer device 12a via bus 18 a. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with computer device 12a, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processor 16a performs various functional applications and data processing by running the programs stored in the system memory 28a, for example, implementing the obstacle identification method shown in the above embodiments.
The present invention also provides a computer-readable medium on which a computer program is stored, and the program, when executed by a processor, implements the obstacle identification method shown in the above embodiments.
The computer-readable media of this embodiment may include RAM 30a, and/or cache memory 32a, and/or storage system 34a in the system memory 28a in the embodiment illustrated in FIG. 4 above.
With the development of technology, the propagation path of computer programs is no longer limited to tangible media, and the computer programs can be directly downloaded from a network or acquired by other methods. Accordingly, the computer-readable medium in the present embodiment may include not only tangible media but also intangible media.
The computer-readable medium of the present embodiments may take any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
In the embodiments provided by the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative; the division into units is only one logical functional division, and other divisions are possible in actual implementation.
The units described as separate parts may or may not be physically separate, and components displayed as units may or may not be physical units; they may be located in one place or distributed across a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device) or a processor to execute some of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (16)

1. An obstacle identification method, characterized in that the method comprises:
respectively matching point clouds of obstacles to be identified with each obstacle point cloud template in a plurality of obstacle point cloud templates generated in advance to obtain a plurality of first matching scores;
generating a feature vector of the obstacle to be identified according to the plurality of first matching scores;
identifying the category of the obstacle to be identified according to a pre-trained classifier model and the feature vector of the obstacle to be identified;
wherein generating a feature vector of the obstacle to be identified according to the plurality of first matching scores comprises:
and arranging the plurality of first matching scores in a row to form a one-dimensional vector as the feature vector of the obstacle to be identified.
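As an editorial illustration only (not part of the claims), the step recited in claim 1 can be sketched in Python as follows: the per-template first matching scores are arranged in a row as a one-dimensional feature vector and passed to a pre-trained classifier. The names build_feature_vector, match_score and classifier are hypothetical and introduced here only for this sketch.

```python
import numpy as np

def build_feature_vector(obstacle_cloud, templates, match_score):
    # Match the point cloud of the obstacle to be identified against every
    # pre-generated obstacle point cloud template, then arrange the resulting
    # first matching scores in a row as a one-dimensional feature vector.
    scores = [match_score(obstacle_cloud, template) for template in templates]
    return np.asarray(scores, dtype=np.float32)

# Hypothetical usage with a pre-trained scikit-learn style classifier model:
# features = build_feature_vector(cloud, templates, match_score)
# category = classifier.predict(features.reshape(1, -1))[0]
```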
2. The method according to claim 1, wherein before matching the point cloud of the obstacle to be identified with each of a plurality of obstacle point cloud templates generated in advance to obtain a plurality of first matching scores, the method further comprises:
and generating the plurality of obstacle point cloud templates according to the top N numerical values with the highest use frequency in each direction in the obstacle information base of each category counted in advance.
3. The method according to claim 2, wherein the generating the plurality of obstacle point cloud templates according to the top N numerical values with the highest frequency of use in each direction in the obstacle information base of each category counted in advance specifically includes:
respectively acquiring the first N length numerical values of the point cloud of the obstacle with the highest use frequency in the length direction, the first N width numerical values of the point cloud of the obstacle with the highest use frequency in the width direction and the first N height numerical values of the point cloud of the obstacle with the highest use frequency in the height direction from the obstacle information base of each category;
combining the first N length values, the first N width values and the first N height values in the obstacle information base corresponding to each category to generate the plurality of obstacle point cloud templates.
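An illustrative sketch of the template generation recited in claims 2-3, assuming the obstacle information base exposes per-category lists of point cloud lengths, widths and heights; the function name, dictionary keys, and the choice to represent a template by its bounding dimensions are all assumptions of this sketch.

```python
from collections import Counter
from itertools import product

def generate_templates(category_info_base, n):
    # category_info_base is assumed to be a dict holding the length, width and
    # height of every obstacle point cloud recorded for one obstacle category.
    top_n = lambda values: [v for v, _ in Counter(values).most_common(n)]
    lengths = top_n(category_info_base["length"])
    widths = top_n(category_info_base["width"])
    heights = top_n(category_info_base["height"])
    # Combine the top-N values of each dimension into N*N*N templates; how a
    # template point cloud is populated from these dimensions is left open.
    return list(product(lengths, widths, heights))
```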
4. The method according to claim 2, wherein the step of matching the point cloud of the obstacle to be identified with each of a plurality of obstacle point cloud templates generated in advance to obtain a plurality of first matching scores includes:
respectively projecting the point cloud of the obstacle to be identified and each obstacle point cloud template in the plurality of obstacle point cloud templates on a length plane and a width plane;
carrying out grid division on the projection of the point cloud of the obstacle to be identified and the projection of each obstacle point cloud template in the plurality of obstacle point cloud templates according to the same scale;
matching each grid in the projection of the point cloud of the obstacle to be identified with a corresponding grid in the projection of each obstacle point cloud template in the plurality of obstacle point cloud templates to obtain a grid matching result between the point cloud of the obstacle to be identified and each obstacle point cloud template in the plurality of obstacle point cloud templates;
and obtaining a plurality of first matching scores according to a grid matching result between the point cloud of the obstacle to be identified and each obstacle point cloud template in the plurality of obstacle point cloud templates.
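A hedged sketch of the matching recited in claim 4: both point clouds are projected onto the length-width plane, divided into grids at the same scale, and compared cell by cell. The intersection-over-union of occupied cells used below is only one possible grid matching result; the claim does not prescribe a particular scoring formula.

```python
import numpy as np

def occupancy_grid(points_xy, x_range, y_range, cell=0.1):
    # Project a point cloud onto the length-width plane and mark which grid
    # cells (all of the same scale `cell`) contain at least one point.
    nx = int(np.ceil((x_range[1] - x_range[0]) / cell))
    ny = int(np.ceil((y_range[1] - y_range[0]) / cell))
    grid = np.zeros((nx, ny), dtype=bool)
    ix = np.clip(((points_xy[:, 0] - x_range[0]) / cell).astype(int), 0, nx - 1)
    iy = np.clip(((points_xy[:, 1] - y_range[0]) / cell).astype(int), 0, ny - 1)
    grid[ix, iy] = True
    return grid

def grid_match_score(cloud_xy, template_xy, x_range, y_range, cell=0.1):
    # Compare the two projections grid by grid; intersection-over-union of the
    # occupied cells serves as the matching score (an assumed choice).
    a = occupancy_grid(cloud_xy, x_range, y_range, cell)
    b = occupancy_grid(template_xy, x_range, y_range, cell)
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum()) / union if union else 0.0
```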
5. The method according to any one of claims 2 to 4, wherein before generating the plurality of obstacle point cloud templates according to the top N numerical values which are counted in advance and have the highest frequency of use in each direction in each category of obstacle information base, the method further comprises:
and carrying out classified statistics on the point cloud information of each obstacle in the obstacle training set according to the type of the obstacle to obtain the obstacle information base corresponding to each type.
6. The method of claim 5, wherein prior to identifying the class of the obstacle to be identified based on a pre-trained classifier model and the feature vector of the obstacle to be identified, the method further comprises:
acquiring point cloud information of a plurality of preset obstacles with marked obstacle categories to generate an obstacle training set;
and training the classifier model according to the point cloud information of the preset obstacles in the obstacle training set.
7. The method according to claim 6, wherein training the classifier model according to the point cloud information of the plurality of preset obstacles in the obstacle training set specifically comprises:
respectively projecting the point cloud of each preset obstacle and each obstacle point cloud template in the plurality of obstacle point cloud templates on a length plane and a width plane;
carrying out grid division on the projection of the point cloud of each preset obstacle and the projection of each obstacle point cloud template in the plurality of obstacle point cloud templates according to the same scale;
matching each grid in the projection of the point cloud of each preset obstacle with a corresponding grid in the projection of each obstacle point cloud template in the plurality of obstacle point cloud templates to obtain a grid matching result between the point cloud of each preset obstacle and each obstacle point cloud template in the plurality of obstacle point cloud templates;
obtaining a plurality of second matching scores according to a grid matching result between the point cloud of each preset obstacle and each obstacle point cloud template in the plurality of obstacle point cloud templates;
obtaining a feature vector of the point cloud of the corresponding preset obstacle according to the plurality of second matching scores corresponding to the point cloud of the preset obstacle;
and training a classifier model according to the feature vector of the point cloud of each preset obstacle and the corresponding category of the point cloud of the preset obstacle, so as to determine the classifier model.
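An illustrative sketch of the training recited in claims 6-7, reusing the same hypothetical match_score helper as above. A support vector machine is fitted here purely for illustration; the claimed classifier model is not limited to any particular learner.

```python
import numpy as np
from sklearn.svm import SVC

def train_classifier(labelled_clouds, labels, templates, match_score):
    # Build one feature vector per labelled training obstacle from its second
    # matching scores against every template, then fit a classifier model.
    X = np.array([[match_score(cloud, template) for template in templates]
                  for cloud in labelled_clouds], dtype=np.float32)
    y = np.asarray(labels)
    model = SVC(kernel="rbf")
    model.fit(X, y)
    return model
```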
8. An obstacle recognition apparatus, characterized in that the apparatus comprises:
the matching module is used for respectively matching the point cloud of the obstacle to be identified with each obstacle point cloud template in a plurality of obstacle point cloud templates generated in advance to obtain a plurality of first matching scores;
the feature vector generation module is used for generating feature vectors of the obstacles to be identified according to the plurality of first matching scores;
the identification module is used for identifying the category of the obstacle to be identified according to a pre-trained classifier model and the feature vector of the obstacle to be identified;
the feature vector generation module is specifically configured to arrange the plurality of first matching scores in a row to form a one-dimensional vector, which is used as the feature vector of the obstacle to be identified.
9. The apparatus of claim 8, further comprising:
and the template generating module is used for generating the plurality of obstacle point cloud templates according to the first N numerical values which are counted in advance and have the highest use frequency in each direction in the obstacle information base of each category.
10. The apparatus of claim 9, wherein the template generation module is specifically configured to:
respectively acquiring the first N length numerical values of the point cloud of the obstacle with the highest use frequency in the length direction, the first N width numerical values of the point cloud of the obstacle with the highest use frequency in the width direction and the first N height numerical values of the point cloud of the obstacle with the highest use frequency in the height direction from the obstacle information base of each category;
combining the first N length values, the first N width values and the first N height values in the obstacle information base corresponding to each category to generate the plurality of obstacle point cloud templates.
11. The apparatus of claim 9, wherein the matching module is specifically configured to:
respectively projecting the point cloud of the obstacle to be identified and each obstacle point cloud template in the plurality of obstacle point cloud templates on a length plane and a width plane;
carrying out grid division on the projection of the point cloud of the obstacle to be identified and the projection of each obstacle point cloud template in the plurality of obstacle point cloud templates according to the same scale;
matching each grid in the projection of the point cloud of the obstacle to be identified with a corresponding grid in the projection of each obstacle point cloud template in the plurality of obstacle point cloud templates to obtain a grid matching result between the point cloud of the obstacle to be identified and each obstacle point cloud template in the plurality of obstacle point cloud templates;
and obtaining a plurality of first matching scores according to a grid matching result between the point cloud of the obstacle to be identified and each obstacle point cloud template in the plurality of obstacle point cloud templates.
12. The apparatus of any of claims 9-11, further comprising:
and the classification module is used for performing classification statistics on the point cloud information of each obstacle in the obstacle training set according to the type of the obstacle to obtain the obstacle information base corresponding to each type.
13. The apparatus of claim 12, further comprising:
the acquisition module is used for acquiring point cloud information of a plurality of preset obstacles marked with obstacle categories to generate an obstacle training set;
and the training module is used for training the classifier model according to the point cloud information of the preset obstacles in the obstacle training set.
14. The apparatus of claim 13, wherein the training module is specifically configured to:
respectively projecting the point cloud of each preset obstacle and each obstacle point cloud template in the plurality of obstacle point cloud templates on a length plane and a width plane;
carrying out grid division on the projection of the point cloud of each preset obstacle and the projection of each obstacle point cloud template in the plurality of obstacle point cloud templates according to the same scale;
matching each grid in the projection of the point cloud of each preset obstacle with a corresponding grid in the projection of each obstacle point cloud template in the plurality of obstacle point cloud templates to obtain a grid matching result between the point cloud of each preset obstacle and each obstacle point cloud template in the plurality of obstacle point cloud templates;
obtaining a plurality of second matching scores according to a grid matching result between the point cloud of each preset obstacle and each obstacle point cloud template in the plurality of obstacle point cloud templates;
obtaining a feature vector of the point cloud of the corresponding preset obstacle according to the plurality of second matching scores corresponding to the point cloud of the preset obstacle;
and training a classifier model according to the feature vector of the point cloud of each preset obstacle and the corresponding category of the point cloud of the preset obstacle, so as to determine the classifier model.
15. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1-7 when executing the program.
16. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-7.
CN201710051916.3A 2017-01-20 2017-01-20 Obstacle identification method and device, computer equipment and readable medium Active CN106845416B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710051916.3A CN106845416B (en) 2017-01-20 2017-01-20 Obstacle identification method and device, computer equipment and readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710051916.3A CN106845416B (en) 2017-01-20 2017-01-20 Obstacle identification method and device, computer equipment and readable medium

Publications (2)

Publication Number Publication Date
CN106845416A CN106845416A (en) 2017-06-13
CN106845416B true CN106845416B (en) 2021-09-21

Family

ID=59119680

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710051916.3A Active CN106845416B (en) 2017-01-20 2017-01-20 Obstacle identification method and device, computer equipment and readable medium

Country Status (1)

Country Link
CN (1) CN106845416B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108733065B (en) * 2017-09-29 2021-06-04 北京猎户星空科技有限公司 Obstacle avoidance method and device for robot and robot
CN107734260A (en) * 2017-10-26 2018-02-23 维沃移动通信有限公司 A kind of image processing method and mobile terminal
CN109145489B (en) 2018-09-07 2020-01-17 百度在线网络技术(北京)有限公司 Obstacle distribution simulation method and device based on probability chart and terminal
CN111105435B (en) * 2018-10-25 2023-09-29 北京嘀嘀无限科技发展有限公司 Mark matching method and device and terminal equipment
CN110216664B (en) * 2019-04-30 2020-12-22 北京云迹科技有限公司 Obstacle identification method and device based on point cloud data
CN110974088B (en) * 2019-11-29 2021-09-24 深圳市杉川机器人有限公司 Sweeping robot control method, sweeping robot and storage medium
CN110807806B (en) * 2020-01-08 2020-04-14 中智行科技有限公司 Obstacle detection method and device, storage medium and terminal equipment
CN111631650B (en) * 2020-06-05 2021-12-03 上海黑眸智能科技有限责任公司 Indoor plan generating method, system and terminal based on obstacle height detection and sweeping robot
CN112347999B (en) * 2021-01-07 2021-05-14 深圳市速腾聚创科技有限公司 Obstacle recognition model training method, obstacle recognition method, device and system

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8165352B1 (en) * 2007-08-06 2012-04-24 University Of South Florida Reconstruction of biometric image templates using match scores
CN102222232A (en) * 2011-06-24 2011-10-19 常州锐驰电子科技有限公司 Multi-level rapid filtering and matching device and method for human faces
CN102779280B (en) * 2012-06-19 2014-07-30 武汉大学 Traffic information extraction method based on laser sensor
CN103236043B (en) * 2013-04-28 2015-10-28 北京农业信息技术研究中心 A kind of plant organ point cloud restoration method
US9349076B1 (en) * 2013-12-20 2016-05-24 Amazon Technologies, Inc. Template-based target object detection in an image
AU2014240213B2 (en) * 2014-09-30 2016-12-08 Canon Kabushiki Kaisha System and Method for object re-identification
CN106295460B (en) * 2015-05-12 2019-05-03 株式会社理光 The detection method and equipment of people
CN105574527B (en) * 2015-12-14 2019-03-29 北京工业大学 A kind of quick object detecting method based on local feature learning
CN106127153B (en) * 2016-06-24 2019-03-05 南京林业大学 The traffic sign recognition methods of Vehicle-borne Laser Scanning point cloud data
CN106295544B (en) * 2016-08-04 2019-05-28 山东师范大学 A kind of unchanged view angle gait recognition method based on Kinect
CN106199558A (en) * 2016-08-18 2016-12-07 宁波傲视智绘光电科技有限公司 Barrier method for quick

Also Published As

Publication number Publication date
CN106845416A (en) 2017-06-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant