CN106919908B - Obstacle identification method and device, computer equipment and readable medium - Google Patents

Obstacle identification method and device, computer equipment and readable medium Download PDF

Info

Publication number
CN106919908B
Authority
CN
China
Prior art keywords
obstacle
frame
frames
identified
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710073031.3A
Other languages
Chinese (zh)
Other versions
CN106919908A (en)
Inventor
谢国洋
李晓晖
郭疆
王亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201710073031.3A priority Critical patent/CN106919908B/en
Publication of CN106919908A publication Critical patent/CN106919908A/en
Application granted granted Critical
Publication of CN106919908B publication Critical patent/CN106919908B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques

Abstract

The invention provides an obstacle identification method and device, computer equipment and a readable medium. The method comprises the following steps: acquiring information of obstacles to be identified in N+1 consecutive frames around the current vehicle; according to the information of the obstacles to be identified in each of the first N frames, acquiring first point cloud projection maps of point cloud layers at no fewer than two heights in each frame on a horizontal plane, a first reflection information projection map of the obstacles to be identified on the horizontal plane, and a first duty ratio projection map of the obstacles to be identified on the horizontal plane; and predicting a first obstacle category map of the (N+1)-th frame according to a pre-trained classifier model, the first obstacle category map of the obstacles to be identified on the horizontal plane in each of the first N frames, a preset weight for each of the first N frames, and the at least two first point cloud projection maps, the first reflection information projection map and the first duty ratio projection map of each frame. The technical scheme of the invention can effectively improve the accuracy and efficiency of identifying the obstacle to be identified.

Description

Obstacle identification method and device, computer equipment and readable medium
[ technical field ]
The invention relates to the technical field of automatic driving, in particular to an obstacle identification method and device, computer equipment and a readable medium.
[ background of the invention ]
In existing automatic driving technology, the output of obstacle recognition serves as the input to control and planning, so accurate and fast recognition of obstacles to be identified is a very critical technology.
In the prior art, a camera or a laser radar is generally adopted to identify obstacles. The camera scheme works in scenes with sufficient illumination and a relatively stable environment; however, in bad weather or cluttered road environments the camera's view is often unstable, so the acquired obstacle information is inaccurate. The laser radar, while very expensive, is very stable and safe for identifying obstacles. In the prior art, when a laser radar is used, the type of an obstacle to be identified is judged from the point cloud size and local features acquired by scanning a single frame. One frame of point cloud generally refers to one full 360-degree rotation of the laser radar in 1 s, scanning the circle of obstacles around the current vehicle; a single frame of point cloud may therefore contain one or several obstacles to be identified. The type is then judged from local features: for example, whether an obstacle is a person is judged from whether the local feature of its point cloud is a human head, and whether it is a bicycle from whether the local feature is a bicycle head.
However, in the prior art, when an obstacle is detected from a single frame of point cloud scanned by the laser radar, all obstacles are treated as static; for example, a pedestrian is easily recognized as a columnar object in the background. As a result, the accuracy of recognizing obstacles in the road is poor and the recognition efficiency is low.
[ summary of the invention ]
The invention provides an obstacle identification method and device, computer equipment and a readable medium, which are used for improving the identification accuracy and identification efficiency of an obstacle to be identified in automatic driving.
The invention provides an obstacle identification method, which comprises the following steps:
acquiring information of obstacles to be identified of continuous N +1 frames around a current vehicle scanned by a laser radar;
according to the information of the obstacles to be recognized in each frame of the first N frames in the N +1 frames, acquiring a first point cloud projection diagram of a point cloud layer with at least two heights in each frame on a horizontal plane, a first reflection information projection diagram of the obstacles to be recognized in each frame on the horizontal plane, and a first duty ratio projection diagram of the obstacles to be recognized in each frame on the horizontal plane;
predicting the first obstacle category map of the (N + 1) th frame in the (N + 1) th frame according to a pre-trained classifier model, a pre-acquired first obstacle category map of the obstacle to be recognized in each frame in the previous N frames on the horizontal plane, a pre-set weight of each frame in the previous N frames, and at least two first point cloud projection maps, the first reflection information projection map and the first duty ratio projection map of each frame in the previous N frames.
Further optionally, in the method as described above, after predicting the first obstacle category map of the N +1 th frame in the N +1 th frame according to a pre-trained classifier model, a pre-obtained first obstacle category map of the obstacle to be identified in the horizontal plane in each of the N previous frames, a pre-set weight of each of the N previous frames, and at least two of the first point cloud projection map, the first reflection information projection map, and the first duty ratio projection map of each of the N previous frames, the method further includes:
and identifying the category of each obstacle to be identified in the point cloud of the (N + 1) th frame according to the first obstacle category map of the (N + 1) th frame and the information of the obstacle to be identified of the (N + 1) th frame.
Further optionally, in the method as described above, identifying a category of each obstacle to be identified in the point cloud of the N +1 th frame according to the first obstacle category map of the N +1 th frame and the information of the obstacle to be identified of the N +1 th frame specifically includes:
according to the first obstacle category map of the (N + 1) th frame, identifying the category of each obstacle to be identified in the point cloud of the obstacle to be identified of the (N + 1) th frame;
and judging whether more than two different categories are identified on the same obstacle to be identified in the point cloud of the obstacle to be identified in the (N + 1) th frame, if so, identifying the category of the obstacle to be identified according to the number of points respectively corresponding to more than two different categories in the point cloud of the obstacle to be identified.
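The point-count vote described in the step above can be sketched as follows. This is an illustrative sketch, not the patent's implementation; `point_labels` is a hypothetical list mapping each point of one obstacle's point cloud to the category assigned by the obstacle category map:

```python
from collections import Counter

def resolve_obstacle_category(point_labels):
    """If two or more different categories were identified on the same
    obstacle, pick the category covering the most points of its point
    cloud (majority vote over per-point labels)."""
    counts = Counter(point_labels)
    category, _ = counts.most_common(1)[0]
    return category

# An obstacle whose points were labeled mostly "pedestrian":
labels = ["pedestrian"] * 40 + ["pole"] * 5
assert resolve_obstacle_category(labels) == "pedestrian"
```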
Further optionally, in the method as described above, according to the information of the obstacle to be identified in each frame of the N +1 frames, obtaining a first point cloud projection view of a point cloud layer of at least two heights in each frame on a horizontal plane, a first reflection information projection view of the obstacle to be identified in each frame on the horizontal plane, and a first duty ratio projection view of the obstacle to be identified in each frame on the horizontal plane specifically includes:
acquiring point cloud layers with at least two heights parallel to the horizontal plane according to the point clouds of the obstacles to be identified in each frame of the previous N frames; projecting the point cloud layers with the at least two heights on the horizontal plane respectively to obtain at least two first point cloud projection drawings corresponding to each frame;
according to the reflection values of all points on the surface of the obstacle to be recognized in each frame of the previous N frames, identifying the reflection values of all points on the surface of the obstacle to be recognized in the projection of the point cloud of the obstacle to be recognized in each frame on the horizontal plane, and obtaining the first reflection information projection graph corresponding to each frame;
and acquiring the first duty ratio projection drawing of the point cloud of the obstacle to be identified in each frame on the horizontal plane according to the point cloud of the obstacle to be identified in each frame of the previous N frames.
Further optionally, in the method as described above, before predicting the first obstacle category map of the N +1 th frame in the N +1 th frame, according to a pre-trained classifier model, a pre-obtained first obstacle category map of the obstacle to be identified in the each frame in the previous N frames at the horizontal plane, a pre-set weight of each frame in the previous N frames, and at least two first point cloud projection maps, the first reflection information projection map, and the first duty ratio projection map of each frame in the previous N frames, the method further includes:
acquiring the first obstacle category map of the obstacle to be identified in each frame of the previous N frames on the horizontal plane;
further, acquiring the first obstacle category map of the obstacle to be identified in each of the previous N frames on the horizontal plane specifically includes:
acquiring the first obstacle category map of the obstacle to be identified in the 1 st frame at the horizontal plane from a static map;
predicting the first obstacle category map of the (i+1)-th frame according to a pre-trained classifier model, the first obstacle category map of the obstacle to be identified on the horizontal plane in each of the previous i frames, and the at least two first point cloud projection maps, the first reflection information projection map and the first duty ratio projection map of each of the previous i frames, which are obtained in advance; wherein i is an integer satisfying 1 ≤ i ≤ N-1.
Further optionally, in the method as described above, before predicting the first obstacle category map of the N +1 th frame in the N +1 th frame, according to a pre-trained classifier model, a pre-obtained first obstacle category map of the obstacle to be identified in the each frame in the previous N frames at the horizontal plane, a pre-set weight of each frame in the previous N frames, and at least two first point cloud projection maps, the first reflection information projection map, and the first duty ratio projection map of each frame in the previous N frames, the method further includes:
setting a weight W_j for the j-th frame in the previous N frames and a weight W_{j+1} for the (j+1)-th frame, wherein W_{j+1} > W_j, and j is an integer greater than or equal to 1 and less than or equal to N; or
setting a weight Q for the 1st frame to the int(N/2)-th frame of the previous N frames, and setting a weight R for the int(N/2)-th frame to the N-th frame of the previous N frames, wherein R is greater than Q.
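The two weighting schemes above can be sketched as follows. The normalization and the linear ramp are illustrative choices; the patent only requires that later frames weigh more than earlier ones:

```python
def increasing_weights(n):
    """Scheme 1: strictly increasing per-frame weights W_1 < ... < W_N,
    normalized to sum to 1. The linear ramp is one possible choice
    satisfying W_{j+1} > W_j."""
    raw = list(range(1, n + 1))
    total = sum(raw)
    return [w / total for w in raw]

def two_level_weights(n, q=1.0, r=2.0):
    """Scheme 2: weight Q for frames 1..int(N/2) and a larger weight R
    for the remaining frames, with R > Q. The values of Q and R here
    are assumptions."""
    half = n // 2
    raw = [q] * half + [r] * (n - half)
    total = sum(raw)
    return [w / total for w in raw]
```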
Further optionally, in the method as described above, before predicting the first obstacle category map of the N +1 th frame in the N +1 th frame, according to a pre-trained classifier model, a pre-obtained first obstacle category map of the obstacle to be identified in the each frame in the previous N frames at the horizontal plane, a pre-set weight of each frame in the previous N frames, and at least two first point cloud projection maps, the first reflection information projection map, and the first duty ratio projection map of each frame in the previous N frames, the method further includes:
collecting information of a plurality of groups of continuous N +1 frames of preset obstacles with known categories to generate an obstacle training set; the information of the preset barrier of each frame comprises a point cloud of the preset barrier and a reflection value of each point of the preset barrier;
and training the classifier model according to the information of the preset obstacles of a plurality of groups of continuous N +1 frames in the obstacle training set.
Further optionally, in the method as described above, training the classifier model according to the information of the preset obstacles in the plurality of groups of consecutive N +1 frames in the obstacle training set specifically includes:
respectively acquiring a second point cloud projection drawing of point cloud layers with at least two heights of the preset obstacles in each frame of each group, a second reflection information projection drawing of the preset obstacles in each frame in the horizontal plane and a second duty ratio projection drawing of the preset obstacles in each frame in the horizontal plane according to the information of the preset obstacles in each frame of the first N frames in the N +1 frames of each group in the obstacle training set;
training the classifier model according to a second obstacle category graph of the preset obstacle in the horizontal plane in each frame of the previous N frames of each group, preset weight of each frame in the previous N frames of each group, at least two second point cloud projection graphs, the second reflection information projection graph and the second duty ratio projection graph of each frame in the previous N frames, and known categories of the preset obstacle corresponding to each group, so as to determine the classifier model.
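The training step above combines, per frame, the projection maps and the prior category map, scaled by the frame's weight, into one feature vector per grid cell. A hypothetical sketch of that feature assembly (field names such as `layer_projections` are illustrative, not from the patent; the actual classifier fitting is omitted):

```python
def build_training_sample(frames, weights):
    """Assemble one training sample for a single grid cell from the
    first N frames. Each frame contributes its per-cell features
    (one value per height-layer projection, the reflection projection
    value, the duty ratio value, and the prior category), scaled by
    that frame's preset weight."""
    feature_vector = []
    for frame, w in zip(frames, weights):
        cell_features = (
            frame["layer_projections"]                      # per-layer values
            + [frame["reflection"], frame["duty_ratio"], frame["category"]]
        )
        feature_vector.extend(w * f for f in cell_features)
    return feature_vector
```

The resulting vectors, paired with the known category of each group's preset obstacle, would then be fed to any standard classifier.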
The present invention also provides an obstacle recognition apparatus, the apparatus including:
the obstacle information acquisition module is used for acquiring information of obstacles to be identified, which are obtained by scanning continuous N +1 frames around the current vehicle by the laser radar;
a parameter information obtaining module, configured to obtain, according to information of the obstacle to be identified in each frame of a previous N frames of the N +1 frames, a first point cloud projection view of a point cloud layer of at least two heights in each frame on a horizontal plane, a first reflection information projection view of the obstacle to be identified in each frame on the horizontal plane, and a first duty ratio projection view of the obstacle to be identified in each frame on the horizontal plane;
a prediction module, configured to predict the first obstacle category map of an N +1 th frame in the N +1 th frame according to a pre-trained classifier model, a pre-obtained first obstacle category map of the obstacle to be recognized in each frame in the previous N frames on the horizontal plane, a pre-set weight of each frame in the previous N frames, and at least two first point cloud projection maps, the first reflection information projection map, and the first duty ratio projection map of each frame in the previous N frames.
Further optionally, in the apparatus as described above, further comprising:
and the obstacle identification module is used for identifying the category of each obstacle to be identified in the point cloud of the (N + 1) th frame according to the first obstacle category map of the (N + 1) th frame and the information of the obstacle to be identified of the (N + 1) th frame.
Further optionally, in the apparatus as described above, the obstacle identification module is specifically configured to:
according to the first obstacle category map of the (N + 1) th frame, identifying the category of each obstacle to be identified in the point cloud of the obstacle to be identified of the (N + 1) th frame;
and judging whether more than two different categories are identified on the same obstacle to be identified in the point cloud of the obstacle to be identified in the (N + 1) th frame, if so, identifying the category of the obstacle to be identified according to the number of points respectively corresponding to more than two different categories in the point cloud of the obstacle to be identified.
Further optionally, in the apparatus described above, the parameter information obtaining module is specifically configured to:
acquiring point cloud layers with at least two heights parallel to the horizontal plane according to the point clouds of the obstacles to be identified in each frame of the previous N frames; projecting the point cloud layers with the at least two heights on the horizontal plane respectively to obtain at least two first point cloud projection drawings corresponding to each frame;
according to the reflection values of all points on the surface of the obstacle to be identified in each frame of the previous N frames, identifying the reflection values of all points on the surface of the obstacle to be identified in the projection of the point cloud of the obstacle to be identified in each frame on the horizontal plane, and obtaining the first reflection information projection drawing corresponding to each frame;
and acquiring the first duty ratio projection drawing of the point cloud of the obstacle to be identified in each frame on the horizontal plane according to the point cloud of the obstacle to be identified in each frame of the previous N frames.
Further optionally, in the apparatus as described above, further comprising:
an obstacle category obtaining module, configured to obtain the first obstacle category map of the obstacle to be identified in the horizontal plane in each of the previous N frames;
further, the obstacle category acquiring module is specifically configured to:
acquiring the first obstacle category map of the obstacle to be identified in the 1 st frame at the horizontal plane from a static map;
predicting the first obstacle category map of the (i+1)-th frame according to a pre-trained classifier model, the first obstacle category map of the obstacle to be identified on the horizontal plane in each of the previous i frames, and the at least two first point cloud projection maps, the first reflection information projection map and the first duty ratio projection map of each of the previous i frames, which are obtained in advance; wherein i is an integer satisfying 1 ≤ i ≤ N-1.
Further optionally, in the apparatus as described above, further comprising:
a weight setting module, configured to set a weight W_j for the j-th frame in the previous N frames and a weight W_{j+1} for the (j+1)-th frame, wherein W_{j+1} > W_j, and j is an integer greater than or equal to 1 and less than or equal to N; or
to set a weight Q for the 1st frame to the int(N/2)-th frame of the previous N frames, and a weight R for the int(N/2)-th frame to the N-th frame of the previous N frames, wherein R is greater than Q.
Further optionally, in the apparatus as described above, further comprising:
the acquisition module is used for acquiring information of a plurality of groups of continuous N +1 frames of preset obstacles of known types to generate an obstacle training set; the information of the preset barrier of each frame comprises a point cloud of the preset barrier and a reflection value of each point of the preset barrier;
and the training module is used for training the classifier model according to the information of the preset obstacles of a plurality of groups of continuous N +1 frames in the obstacle training set.
Further optionally, in the apparatus as described above, the training module is specifically configured to:
respectively acquiring a second point cloud projection drawing of point cloud layers with at least two heights of the preset obstacles in each frame of each group, a second reflection information projection drawing of the preset obstacles in each frame in the horizontal plane and a second duty ratio projection drawing of the preset obstacles in each frame in the horizontal plane according to the information of the preset obstacles in each frame of the first N frames in the N +1 frames of each group in the obstacle training set;
training the classifier model according to a second obstacle category graph of the preset obstacle in the horizontal plane in each frame of the previous N frames of each group, preset weight of each frame in the previous N frames of each group, at least two second point cloud projection graphs, the second reflection information projection graph and the second duty ratio projection graph of each frame in the previous N frames, and known categories of the preset obstacle corresponding to each group, so as to determine the classifier model.
The invention also provides a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the obstacle identification method as described above when executing the program.
The invention also provides a computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the obstacle identification method as described above.
According to the obstacle identification method and device, the computer equipment and the readable medium, the information of the obstacle to be identified of continuous N +1 frames around the current vehicle is scanned by the laser radar; according to the information of the obstacles to be recognized in each frame of the first N frames in the N +1 frames, acquiring a first point cloud projection drawing of a point cloud layer with at least two heights in each frame on a horizontal plane, a first reflection information projection drawing of the obstacles to be recognized in each frame on the horizontal plane, and a first duty ratio projection drawing of the obstacles to be recognized in each frame on the horizontal plane; and predicting a first obstacle category diagram of an N +1 th frame in the N +1 frames according to a pre-trained classifier model, a first obstacle category diagram of an obstacle to be recognized in each frame in the previous N frames in a horizontal plane, a preset weight of each frame in the previous N frames, at least two first point cloud projection diagrams, a first reflection information projection diagram and a first duty ratio projection diagram of each frame in the previous N frames. Compared with the prior art that the type of the obstacle to be recognized is detected through the point cloud of the obstacle to be recognized of a single frame, the technical scheme of the invention recognizes the type of the obstacle to be recognized according to the information of the obstacle to be recognized of multiple frames, and the recognition accuracy of the obstacle to be recognized can be effectively improved due to the fact that the information of the obstacle to be recognized of multiple frames is referred to, so that the recognition efficiency of the obstacle to be recognized can be effectively improved.
[ description of the drawings ]
Fig. 1 is a flowchart of an obstacle identification method according to an embodiment of the present invention.
Fig. 2 is a structural diagram of a first obstacle recognition device according to an embodiment of the present invention.
Fig. 3 is a structural diagram of a second obstacle recognition device according to an embodiment of the present invention.
Fig. 4 is a block diagram of a computer apparatus of the present invention.
[ detailed description ]
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in detail with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a flowchart of an obstacle identification method according to an embodiment of the present invention. As shown in fig. 1, the obstacle identification method of this embodiment may specifically include the following steps:
100. acquiring information of obstacles to be identified of continuous N +1 frames around a current vehicle scanned by a laser radar;
the obstacle recognition method of the embodiment is applied to the technical field of automatic driving. In automatic driving, a vehicle is required to be capable of automatically identifying obstacles in a road so as to make a decision and control in time during vehicle driving, and the vehicle can safely drive conveniently. The execution subject of the obstacle recognition method of the embodiment may be an obstacle recognition device, which may be integrated by a plurality of modules, and the obstacle recognition device may be specifically provided in an autonomous vehicle to control the autonomous vehicle.
The information of the obstacle to be identified in this embodiment may be obtained by laser radar scanning. The laser radar may be of 16-line, 32-line, 64-line or another specification; a higher line count means a higher energy density. In this embodiment, the laser radar mounted on the current vehicle rotates 360 degrees every second and scans the information of the circle of obstacles around the current vehicle; this constitutes one frame of obstacle information. The information of an obstacle to be identified may include its point cloud and its reflection values, and there may be one or more obstacles to be identified around the current vehicle. After scanning, a coordinate system may be established with the centroid of the current vehicle as the origin: two directions parallel to the horizontal plane serve as the x and y directions (length and width), and the direction perpendicular to the ground serves as the z direction (height). Each obstacle to be identified can then be located in this coordinate system according to the relative position and distance of each point of its point cloud to the origin; thus, in each frame's point cloud, the point cloud of each individual obstacle can be obtained from the relative positions of its points to the current vehicle. In addition, the laser radar can detect the reflection value of each point of each obstacle to be identified. In practical applications, the coordinate system may instead use the centroid of the laser radar as the origin, with the other directions unchanged.
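The coordinate system just described amounts to expressing every scanned point relative to a chosen origin (the vehicle centroid or the radar centroid). A minimal sketch, with function name and tuple layout as assumptions:

```python
def to_vehicle_frame(points, origin):
    """Express each scanned (x, y, z) point relative to the chosen
    origin: x and y parallel to the horizontal plane, z vertical.
    The origin may be the vehicle centroid or the laser radar centroid."""
    ox, oy, oz = origin
    return [(x - ox, y - oy, z - oz) for (x, y, z) in points]

# A point scanned at the height of the vehicle centroid (1.3 m):
assert to_vehicle_frame([(2.0, 3.0, 1.3)], (0.0, 0.0, 1.3)) == [(2.0, 3.0, 0.0)]
```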
The value of N in this embodiment may be taken according to actual requirements, for example, 8, 10, or other values may be taken.
101. According to the information of the obstacles to be recognized in each frame of the first N frames in the N +1 frames, acquiring a first point cloud projection drawing of a point cloud layer with at least two heights in each frame on a horizontal plane, a first reflection information projection drawing of the obstacles to be recognized in each frame on the horizontal plane, and a first duty ratio projection drawing of the obstacles to be recognized in each frame on the horizontal plane;
in the method for identifying the obstacle according to the embodiment, the obstacle type of the (N + 1) th frame is predicted mainly by using some parameter information of the previous N frames, so that the obstacle is identified. However, since prediction cannot be performed by directly using a three-dimensional point cloud image in the information of the obstacle to be recognized, in this embodiment, the three-dimensional information is converted into two-dimensional information, and the obstacle category of the (N + 1) th frame is predicted by using the two-dimensional information. For example, the two-dimensional information in this embodiment may include a first point cloud projection diagram of a point cloud layer with at least two heights in each frame in a horizontal plane, a first reflection information projection diagram of an obstacle to be identified in each frame in the horizontal plane, and a first duty ratio projection diagram of the obstacle to be identified in each frame in the horizontal plane.
For example, the step 101 may specifically include the following steps:
(a1) acquiring point cloud layers with at least two heights parallel to a horizontal plane according to point clouds of obstacles to be identified in each frame of the previous N frames; respectively projecting the point cloud layers with at least two heights on a horizontal plane to obtain at least two first point cloud projection images corresponding to each frame;
For example, the heights of the point cloud scanned by the laser radar range from a negative height threshold up to a positive height threshold. If the centroid of the current vehicle is 1.3 m above the ground, the ground sits at a height of -1.3 m; the positive height threshold may be set according to the maximum height of obstacles in the road in the actual application, for example +5 m or another value. Then, in each frame's point cloud, point cloud layers at at least two heights parallel to the horizontal plane may be taken. Since a layer too close to the ground contains few features of the obstacles to be identified, the lowest layer can be raised somewhat above the ground; for example, layers from -1.2 m to +1.0 m, from -1.2 m to +2.0 m, from -1.2 m to +3.0 m, and from -1.2 m to +5.0 m may be taken. The layer heights can be chosen according to the height characteristics of obstacles in the roads actually studied: for a road with many pedestrians, the lowest layer can be chosen according to pedestrian height; the next layer can be set according to the height of a car or a bicycle; and so on for each further layer. After the point cloud layers at the at least two heights are obtained, each is projected onto the horizontal plane, converting the three-dimensional point cloud layers into two-dimensional point cloud projection maps.
Projecting the point cloud layer at each height yields one first point cloud projection image; the point cloud layers of at least two heights therefore yield at least two first point cloud projection images.
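The layer slicing and projection described above can be sketched as follows; the grid size, cell resolution, and layer bounds are illustrative assumptions rather than values fixed by this embodiment:

```python
import numpy as np

def layer_projections(points, layer_tops, floor=-1.2, cell=0.2, grid=64):
    """Slice a point cloud into height layers and project each layer
    onto the horizontal (xy) plane as a binary occupancy image.

    points     : (N, 3) array of x, y, z coordinates (vehicle frame)
    layer_tops : upper z bound of each layer, e.g. [1.0, 2.0, 3.0, 5.0]
    floor      : shared lower z bound near the ground (assumed -1.2 m)
    cell       : edge length of one grid cell in metres (assumed value)
    grid       : output image is grid x grid cells (assumed value)
    """
    images = []
    half = grid * cell / 2.0
    for top in layer_tops:
        layer = points[(points[:, 2] >= floor) & (points[:, 2] < top)]
        img = np.zeros((grid, grid), dtype=np.uint8)
        # discretise x, y into grid indices, dropping points off the grid
        ix = ((layer[:, 0] + half) / cell).astype(int)
        iy = ((layer[:, 1] + half) / cell).astype(int)
        ok = (ix >= 0) & (ix < grid) & (iy >= 0) & (iy < grid)
        img[ix[ok], iy[ok]] = 1
        images.append(img)
    return images
```

With layer tops of +1.0 m, +2.0 m, +3.0 m, and +5.0 m this yields the four first point cloud projection images per frame suggested above.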
(a2) According to the reflection values of all points on the surface of the obstacle to be recognized in each frame of the previous N frames, identifying the reflection values of all points on the surface of the obstacle to be recognized in the projection of the point cloud of the obstacle to be recognized in each frame on the horizontal plane, and obtaining a first reflection information projection graph corresponding to each frame;
the laser radar can scan and acquire the point cloud of the obstacle to be identified in each frame, and can detect the reflection value of each point of the obstacle to be identified in the current frame. Because the laser radar set in the autonomous vehicle must acquire the road condition ahead, it is usually mounted higher than the top of the vehicle, so that all the obstacles to be identified around the current vehicle can be scanned in an all-around manner. Therefore, during scanning, the laser radar can detect the reflection value of each position of each obstacle to be identified that it can reach. In general, when the laser radar is mounted high enough, the reflection value of any point on the surface of an obstacle to be identified can theoretically be scanned; that is, any position other than the bottom of the obstacle facing the ground should be reachable. However, when the laser radar is not mounted high enough, it may not scan the surface of an obstacle that faces away from it, but it can at least scan each point on the upper surface of the obstacle; in this case, the reflection value of each point on the upper surface of the obstacle may be identified in the projection of the point cloud of each frame on the horizontal plane. That is, only the point at the maximum height of each obstacle to be identified may be identified in the first reflection information projection view. After the above processing, a corresponding first reflection information projection view can be obtained for each frame.
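A minimal sketch of building the reflection information projection, keeping for each grid cell the reflection value of the highest point, as the paragraph above describes (grid parameters are assumed, not specified by the patent):

```python
import numpy as np

def reflection_projection(points, refl, cell=0.2, grid=64):
    """Project per-point reflection values onto the xy plane.
    When several points fall in one cell, keep the value of the
    highest point, i.e. the surface seen from above.

    points : (N, 3) x, y, z coordinates
    refl   : (N,) reflection value of each point
    """
    half = grid * cell / 2.0
    best_z = np.full((grid, grid), -np.inf)   # highest z seen per cell
    img = np.zeros((grid, grid), dtype=float)
    for (x, y, z), r in zip(points, refl):
        ix, iy = int((x + half) / cell), int((y + half) / cell)
        if 0 <= ix < grid and 0 <= iy < grid and z > best_z[ix, iy]:
            best_z[ix, iy] = z
            img[ix, iy] = r
    return img
```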
(a3) And acquiring a first duty ratio projection drawing of the point cloud of the obstacle to be identified in each frame in the horizontal plane according to the point cloud of the obstacle to be identified in each frame of the previous N frames.
For each frame, if the number of the obstacles to be identified included around the current vehicle scanned by the laser radar is multiple, the point clouds of all the obstacles to be identified, which are obtained by scanning the laser radar, can be projected on a horizontal plane, so that a first duty ratio projection diagram of the point clouds of the obstacles to be identified in the frame on the horizontal plane is obtained. For each frame, a corresponding first duty cycle projection graph can be obtained.
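The patent does not define the duty ratio map precisely; one plausible reading is a normalised occupancy grid over all obstacle points of the frame, sketched here under that assumption:

```python
import numpy as np

def occupancy_projection(points, cell=0.2, grid=64):
    """Project all obstacle points of one frame onto the xy plane and
    record how densely each cell is occupied (point count, normalised
    to [0, 1]).  This is an assumed interpretation of the first duty
    ratio projection map; cell and grid sizes are also assumptions.
    """
    half = grid * cell / 2.0
    counts = np.zeros((grid, grid))
    ix = ((points[:, 0] + half) / cell).astype(int)
    iy = ((points[:, 1] + half) / cell).astype(int)
    ok = (ix >= 0) & (ix < grid) & (iy >= 0) & (iy < grid)
    np.add.at(counts, (ix[ok], iy[ok]), 1)   # accumulate repeated cells
    return counts / counts.max() if counts.max() > 0 else counts
```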
102. And predicting a first obstacle category diagram of an N +1 th frame in the N +1 frames according to a pre-trained classifier model, a first obstacle category diagram of an obstacle to be recognized in each frame in the previous N frames in a horizontal plane, a preset weight of each frame in the previous N frames, at least two first point cloud projection diagrams, a first reflection information projection diagram and a first duty ratio projection diagram of each frame in the previous N frames.
The principle of obstacle recognition in this embodiment is to predict the first obstacle category map of the (N+1)th frame in the N+1 frames based on the first obstacle category map on the horizontal plane of the obstacle to be recognized in each frame of the previous N frames, the weight of each frame of the previous N frames, and the at least two first point cloud projection maps, the first reflection information projection map, and the first duty ratio projection map of each frame of the previous N frames. Because the continuity between frames must be taken into account, the classifier model of this embodiment must be a model capable of processing consecutive frames; for example, it may be a Recurrent Neural Network (RNN) model or a Long Short-Term Memory (LSTM) network model. All of the above information of the previous N frames is input into the pre-trained classifier model, and the classifier model predicts and outputs the first obstacle category map of the (N+1)th frame; from this map, the category of each obstacle to be recognized around the current vehicle, such as a car, can finally be determined.
Alternatively, in the present embodiment, the obstacles to be recognized may be classified into pedestrian, bicycle, car, or other categories. When the category of an obstacle cannot be determined with certainty, it is identified as the "other" category. The obstacle categories may also be gradually extended according to new types of vehicles appearing in the road in practical application. In the first obstacle category map, different obstacle categories may be represented by different colors, different shapes of dots, and the like.
Optionally, before the step 102, the method may further include: and acquiring a first obstacle category map of the obstacle to be identified in each frame in the previous N frames in the horizontal plane.
The step of "obtaining a first obstacle category map of the obstacle to be identified in each frame of the previous N frames on the horizontal plane" may specifically include the following steps:
(b1) acquiring a first obstacle category map of an obstacle to be identified in a1 st frame on a horizontal plane from a static map;
(b2) predicting a first obstacle category map of the (i+1)th frame according to the pre-trained classifier model, the first obstacle category map on the horizontal plane of the obstacle to be recognized in each frame of the previous i frames obtained in advance, and the at least two first point cloud projection maps, the first reflection information projection map and the first duty ratio projection map of each frame of the previous i frames; wherein i is an integer and 1 ≤ i ≤ N-1.
In this embodiment, since there is no first obstacle category map before the 1 st frame, the technical solution of this embodiment cannot be adopted to predict the category map of the obstacle of the 1 st frame according to the previous first obstacle category map. Therefore, the first obstacle category map of the 1 st frame of the present embodiment may be acquired from the static map. Starting from the 2 nd frame, predicting the first obstacle category map of the 2 nd frame by using the first obstacle category map of the 1 st frame, at least two first point cloud projection maps corresponding to the 1 st frame, a first reflection information projection map corresponding to the 1 st frame and a first duty ratio projection map corresponding to the 1 st frame; similarly, the first obstacle category map of the 3 rd frame can be predicted according to the at least two first point cloud projection maps corresponding to the 1 st frame, the first reflection information projection map corresponding to the 1 st frame and the first duty ratio projection map corresponding to the 1 st frame, the at least two first point cloud projection maps corresponding to the 2 nd frame, the first reflection information projection map corresponding to the 2 nd frame and the first duty ratio projection map corresponding to the 2 nd frame, and the first obstacle category maps of the 1 st frame and the 2 nd frame; by analogy, the first obstacle category map of the 4 th frame can be predicted according to the information of the 1 st to 3 rd frames until the first obstacle category map of the N +1 th frame is predicted according to the information of the 1 st to N th frames. Therefore, the type of the obstacle to be identified around the current vehicle can be determined, and the laser radar does not need to continuously scan and acquire the point cloud of the obstacle to be identified of the (N + 2) th frame.
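The frame-by-frame bootstrap of steps (b1) and (b2) can be sketched as follows; `predict` is a stand-in for the trained classifier model and `frame_features` for the per-frame projection maps (both names are hypothetical):

```python
def bootstrap_category_maps(static_map_cat, frame_features, predict):
    """Build the obstacle category map for each of the first N frames:
    frame 1 comes from the static map (step b1), and frame i+1 is
    predicted from the category maps and features of frames 1..i
    (step b2).

    static_map_cat : category map of frame 1 taken from the static map
    frame_features : list of per-frame feature bundles (point cloud,
                     reflection, and duty ratio projections)
    predict        : stub for the pre-trained classifier model
    """
    cat_maps = [static_map_cat]                       # (b1): frame 1
    for i in range(1, len(frame_features)):
        # (b2): use category maps and features of frames 1..i
        cat_maps.append(predict(cat_maps[:i], frame_features[:i]))
    return cat_maps
```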
It should be noted that, in the above embodiment, in the process of acquiring the first obstacle category map of the horizontal plane of the obstacle to be identified in each frame of the previous N frames, the weight of each frame is not considered, and in a specific prediction process, the weight of each frame may be set to be an equal value. But in predicting the first obstacle category map for the (N + 1) th frame in step 102, the weights of each of the previous N frames need to be considered.
In addition, in practical application, when the laser radar first starts to scan, the scanning results of the earliest frames may be unsatisfactory, yet some frames may nevertheless exist before the 1st frame of the consecutive N+1 frames of this embodiment. In that case, the above technical solution of this embodiment may also be adopted, and the first obstacle category map of the 1st frame may be predicted from those actually existing earlier frames.
According to the obstacle identification method, the information of the obstacle to be identified of continuous N +1 frames around the current vehicle scanned by the laser radar is obtained; according to the information of the obstacles to be recognized in each frame of the first N frames in the N +1 frames, acquiring a first point cloud projection drawing of a point cloud layer with at least two heights in each frame on a horizontal plane, a first reflection information projection drawing of the obstacles to be recognized in each frame on the horizontal plane, and a first duty ratio projection drawing of the obstacles to be recognized in each frame on the horizontal plane; and predicting a first obstacle category diagram of an N +1 th frame in the N +1 frames according to a pre-trained classifier model, a first obstacle category diagram of an obstacle to be recognized in each frame in the previous N frames in a horizontal plane, a preset weight of each frame in the previous N frames, at least two first point cloud projection diagrams, a first reflection information projection diagram and a first duty ratio projection diagram of each frame in the previous N frames. Compared with the prior art that the type of the obstacle to be recognized is detected through the point cloud of the obstacle to be recognized of a single frame, the type of the obstacle to be recognized is recognized according to the information of the obstacle to be recognized of multiple frames, and due to the fact that the information of the obstacle to be recognized of the multiple frames is referred to at the same time, the recognition accuracy of the obstacle to be recognized can be effectively improved, and therefore the recognition efficiency of the obstacle to be recognized can be effectively improved.
Further optionally, on the basis of the technical solution of the embodiment shown in fig. 1, before step 102, "predict the first obstacle category map of the N +1 th frame in the N +1 th frame according to the pre-trained classifier model, the first obstacle category map of the obstacle to be recognized in each frame in the previous N frames obtained in advance on the horizontal plane, the preset weight of each frame in the previous N frames, and the at least two first point cloud projection maps, the first reflection information projection map, and the first duty ratio projection map of each frame in the previous N frames", the following steps may also be included: and setting weight for each frame in the previous N frames. For example, the following two modes can be specifically included:
the first mode is as follows: setting a weight Wj for the j-th frame in the previous N frames and a weight Wj+1 for the (j+1)-th frame, wherein Wj+1 > Wj and j is an integer with 1 ≤ j < N;
the second mode is as follows: setting a weight Q for the 1st frame to the int(N/2)th frame in the previous N frames and a weight R for the (int(N/2)+1)th to the Nth frames, wherein R > Q.
In the first mode, the frame weight gradually increases with the frame number; that is, the closer a frame is to the (N+1)th frame to be predicted, the greater the weight it occupies in prediction. In the second mode, within the previous N frames, the weights of the 1st to int(N/2)th frames are equal and the weights of the (int(N/2)+1)th to Nth frames are equal, but the frames nearer the predicted (N+1)th frame carry the greater weight; for example, R may be greater than or equal to 2Q in this embodiment. In practical application, R may also be greater than or equal to 3Q or 1.5Q, or another multiple, according to requirements. In short, it suffices to ensure that the closer a frame is to the frame to be predicted, the greater its weight, so that the prediction of the frame to be predicted is more accurate.
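The two weighting modes can be sketched as follows; the linear form of mode 1 and the normalisation to unit sum are assumed choices, since the text only requires Wj+1 > Wj and R > Q:

```python
def weights_increasing(n):
    """Mode 1: weight grows strictly with frame index (Wj+1 > Wj).
    A simple linear choice, normalised so the weights sum to 1."""
    w = [j + 1 for j in range(n)]
    s = sum(w)
    return [x / s for x in w]

def weights_two_level(n, q=1.0, r=2.0):
    """Mode 2: frames 1..int(n/2) get weight q, later frames get
    weight r > q (the text suggests r >= 2q); normalised to sum 1."""
    raw = [q if j < n // 2 else r for j in range(n)]
    s = sum(raw)
    return [x / s for x in raw]
```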
Further optionally, on the basis of the technical solution of the embodiment shown in fig. 1, after step 102 "predicting the first obstacle category map of the N +1 th frame in the N +1 th frame according to the pre-trained classifier model, the first obstacle category map of the obstacle to be recognized in each frame in the previous N frames obtained in advance on the horizontal plane, the preset weight of each frame in the previous N frames, and the at least two first point cloud projection maps, the first reflection information projection map, and the first duty ratio projection map of each frame in the previous N frames", the following steps may also be included: and identifying the type of each obstacle to be identified in the point cloud of the (N + 1) th frame according to the first obstacle type image of the (N + 1) th frame and the information of the obstacle to be identified of the (N + 1) th frame.
Since the first obstacle category map finally obtained in step 102 of the above embodiment is two-dimensional, it is not possible to accurately identify each obstacle to be identified. Therefore, when the first obstacle category map of the (N + 1) th frame is obtained through prediction, the first obstacle category map can be converted into the point cloud of the obstacle to be recognized of the (N + 1) th frame. Since the first obstacle category map is in the xy plane, which is two-dimensional, the point cloud of the obstacle to be recognized of the N +1 th frame is in the xyz space, which is three-dimensional. In this way, the category of the obstacle under the coordinates can be easily mapped to the point cloud of the obstacle to be recognized in the N +1 th frame in the xyz space according to the xy coordinates in the first obstacle category map. In the conversion process, the types of the obstacles with the same z coordinate at the points with the same xy in the point cloud of the obstacle to be recognized in the (N + 1) th frame can be considered to be the same. For example, the step may specifically include the following steps:
(c1) according to the first obstacle category map of the (N + 1) th frame, identifying the category of each obstacle to be identified in the point cloud of the obstacle to be identified of the (N + 1) th frame;
(c2) and judging whether the same obstacle to be recognized in the point cloud of the obstacle to be recognized in the (N + 1) th frame is marked with more than two different categories, if so, marking the category of the obstacle to be recognized according to the number of points respectively corresponding to the more than two different categories in the point cloud of the obstacle to be recognized.
The point cloud of the obstacle to be identified of the (N+1)th frame is in a three-dimensional space. In three-dimensional space, each obstacle to be identified is independent and is usually easily distinguished. Therefore, after the categories of the obstacles to be recognized are identified in the point cloud of the (N+1)th frame according to the first obstacle category map of the (N+1)th frame, it can be judged whether two or more different categories are identified for the same obstacle to be recognized in that point cloud; if so, the category of the obstacle can be identified according to the number of points corresponding to each of the different categories in the point cloud of the obstacle. For example, in the point cloud of the same obstacle to be recognized, 500 points may be identified as category 1 while 20 points are identified as category 2; since the number of category-1 points is far greater, the category-2 points can be considered noise, category 2 is removed, and the obstacle is recognized as category 1. When more categories are involved, the category of the obstacle to be recognized is identified as the category with the largest number of points, following the principle that the minority obeys the majority.
However, sometimes the traffic condition is not good; for example, when the road is crowded, some obstacles to be recognized may be close to each other. In that case, when two or more different categories are identified for the same obstacle to be recognized in the point cloud of the (N+1)th frame, it may also be judged whether the number of points of each category exceeds a point-number threshold. The point-number threshold of this embodiment may be set to the minimum number of points that can independently constitute one obstacle. When the number of points of a certain category exceeds the threshold, the points identified as that category can be considered to constitute an independent obstacle; even though their number may be lower than that of other categories of points close by, an obstacle of that category is independent. Otherwise, when the number of points identified as a certain category is smaller than the threshold, the points of that category can be considered noise and removed during verification. Through the steps (c1) and (c2), the first obstacle category map of the (N+1)th frame obtained in step 102 can be subjected to post-verification, further enhancing the identification accuracy of the obstacle to be identified and thus further effectively improving the identification efficiency.
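The post-verification of steps (c1) and (c2), including the point-number threshold discussed above, can be sketched as a per-obstacle vote (the threshold value here is an assumption):

```python
from collections import Counter

def resolve_category(point_labels, min_points=50):
    """Given the per-point category labels assigned to ONE obstacle's
    points, keep every label whose point count reaches `min_points`
    (it may be a separate obstacle touching this one); labels below
    the threshold are treated as noise, and the obstacle falls back
    to the single majority label.
    """
    counts = Counter(point_labels)
    kept = {c: n for c, n in counts.items() if n >= min_points}
    if len(kept) >= 2:
        return sorted(kept)          # two or more genuine obstacles
    majority = counts.most_common(1)[0][0]
    return [majority]
```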
Further optionally, on the basis of the technical solution of the foregoing embodiment, before the step 102 "predicting the first obstacle category map of the N +1 th frame in the N +1 th frame according to the pre-trained classifier model, the first obstacle category map of the obstacle to be recognized in each frame of the previous N frames obtained in advance on the horizontal plane, the preset weight of each frame of the previous N frames, and the at least two first point cloud projection maps, the first reflection information projection map, and the first duty ratio projection map of each frame of the previous N frames," the method may further include the following steps:
(d1) collecting information of a plurality of groups of continuous N +1 frames of preset obstacles with known categories to generate an obstacle training set; similarly, the information of the preset obstacle of each frame of the embodiment may include a point cloud of the preset obstacle and a reflection value of each point of the preset obstacle;
(d2) and training a classifier model according to the information of the preset obstacles of a plurality of groups of continuous N +1 frames in the obstacle training set.
In this embodiment, the number of groups of the information of the preset obstacles of the known category included in the obstacle training set may be many, for example, more than 5000 or more than ten thousand or more, and the larger the number of groups of the information of the preset obstacles included in the obstacle training set is, the more accurate the parameters of the determined classifier model are when the classifier model is trained, the more accurate the subsequent identification of the category of the obstacle to be identified according to the classifier model is. The information of each group of preset obstacles may include information of preset obstacles of consecutive N +1 frames. In this way, the classifier model can be trained according to the information of the preset obstacles of a plurality of groups of continuous N +1 frames.
For example, the step (d2) may specifically include the following steps:
(e1) respectively acquiring a second point cloud projection image of point cloud layers with at least two heights of a preset obstacle in each group of frames on a horizontal plane, a second reflection information projection image of the preset obstacle in each group of frames on the horizontal plane and a second duty ratio projection image of the preset obstacle in each group of frames on the horizontal plane according to the preset obstacle information in each frame of the first N frames in the N +1 frames in each group of the obstacle training set;
(e2) and training a classifier model according to a second obstacle class diagram of a preset obstacle on a horizontal plane in each group of pre-acquired frames of the previous N frames, a preset weight of each frame in each group of frames, at least two second point cloud projection diagrams, a second reflection information projection diagram and a second duty ratio projection diagram of each frame in the previous N frames and the known class of the preset obstacle corresponding to each group, so as to determine the classifier model.
In this embodiment, when training the classifier model, the classifier model is trained through the steps (e1) and (e2) by using the information of the obstacles in each group of N +1 frames in the obstacle training set, and the classifier model can be finally determined through multiple times of training. Step (e1) of training the classifier model using the information of the obstacles in each group of N +1 frames is the same as the implementation process of step 101, and reference may be made to the description of step 101 in the above embodiments for details, which is not repeated herein. In step (e2), the second obstacle category map of the N +1 th frame corresponding to each group may be predicted according to a second obstacle category map of the preset obstacle in the horizontal plane in each frame of the previous N frames acquired in advance in each group, a preset weight of each frame in the previous N frames in each group, at least two second point cloud projection maps of each frame in the previous N frames, the second reflection information projection map, and the second duty ratio projection map. The specific process is the same as step 102 in the above embodiment, and reference may be made to the description of the above embodiment for details, which are not repeated herein. The preset setting manner of the weight of each frame in the previous N frames of each group may refer to the description of the above embodiments, and is not described herein again. It should be noted that the weight of each frame in the previous N frames in each group may be the same as or different from the weight of the corresponding frame in the other group.
Then, according to the known categories of the preset obstacles in the group and the point cloud of the preset obstacles of the (N+1)th frame, the point cloud of the (N+1)th frame is projected onto the horizontal plane (the xy plane) by obstacle category to obtain a ground-truth second obstacle category map of the (N+1)th frame. The predicted second obstacle category map of the (N+1)th frame is then compared with this projected category map; when the two differ, the parameters of the classifier model are adjusted and training is repeated until the predicted second obstacle category map of the (N+1)th frame matches the projected one.
Alternatively, the predicted second obstacle category map of the (N+1)th frame corresponding to the group can be converted into the point cloud of the preset obstacles of the (N+1)th frame of that group, so that the categories of the preset obstacles can be displayed more clearly in three dimensions. The predicted categories of the preset obstacles are then compared with their known categories; when they differ, the parameters of the classifier model can be adjusted and training repeated until the predicted categories are the same as the known categories.
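The compare-and-adjust training procedure of the two preceding paragraphs can be sketched as follows; `model` and `update` are stand-ins for the real classifier and its parameter-update rule:

```python
def train_until_match(model, groups, update, max_epochs=100):
    """Sketch of the training loop: for each training group, predict
    the frame-(N+1) category map, compare it with the map obtained
    from the known categories, and adjust the model parameters until
    every prediction agrees with its ground truth.

    groups : iterable of (features, target_map) pairs, one per group
    update : callable(model, features, target_map) adjusting the model
    """
    for _ in range(max_epochs):
        wrong = 0
        for features, target_map in groups:
            if model(features) != target_map:
                update(model, features, target_map)  # adjust parameters
                wrong += 1
        if wrong == 0:       # all predictions match the projections
            return True
    return False             # did not converge within max_epochs
```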
The classifier model is trained through the information of the preset obstacles of the plurality of groups of continuous N +1 frames, so that the parameters of the classifier model can be determined, and the classifier model can be determined. In this way, the obstacle to be recognized can be recognized by using the trained classifier model according to step 100-102.
The second obstacle category map of the obstacle on the horizontal plane is preset in each frame of the previous N frames obtained in step (e2), which is the same as the implementation principle of "obtaining the first obstacle category map of the obstacle to be identified on the horizontal plane in each frame of the previous N frames" in the above embodiment, and details can refer to the description of the above related embodiments, and are not repeated here.
By adopting the obstacle identification method of the embodiment, after the automatic driving vehicle scans the point cloud of the obstacle to be identified through the laser radar, the obstacle to be identified can be identified according to the obstacle identification method, and the driving of the vehicle can be further controlled according to the type of the obstacle, for example, the vehicle is controlled to avoid the obstacle, so that the driving safety of the automatic driving vehicle is effectively improved.
Compared with the prior art, in which the category of the obstacle to be recognized is detected from the point cloud of a single frame, the obstacle identification method of this embodiment recognizes the category of the obstacle to be recognized according to the information of the obstacle to be recognized of multiple frames, so that the recognition accuracy, and hence the recognition efficiency, of the obstacle to be recognized can be effectively improved.
Fig. 2 is a structural diagram of a first obstacle recognition device according to an embodiment of the present invention. As shown in fig. 2, the obstacle identification device of the present embodiment may specifically include: an obstacle information acquisition module 10, a parameter information acquisition module 11 and a prediction module 12.
The obstacle information acquisition module 10 is configured to acquire information of obstacles to be identified, which are obtained by scanning, by a laser radar, consecutive N +1 frames around a current vehicle; the parameter information acquisition module 11 is configured to acquire, according to information of an obstacle to be identified in each frame of the first N frames in the N +1 frames acquired by the obstacle information acquisition module 10, a first point cloud projection view of a point cloud layer of at least two heights in each frame on a horizontal plane, a first reflection information projection view of the obstacle to be identified in each frame on the horizontal plane, and a first duty ratio projection view of the obstacle to be identified in each frame on the horizontal plane; the prediction module 12 is configured to predict the first obstacle category map of the (N + 1) th frame in the N +1 frames according to a pre-trained classifier model, a first obstacle category map of an obstacle to be recognized in each frame in the previous N frames acquired in advance on a horizontal plane, a preset weight of each frame in the previous N frames, and at least two first point cloud projection maps, a first reflection information projection map, and a first duty ratio projection map of each frame in the previous N frames acquired by the parameter information acquisition module 11.
The obstacle identification device of this embodiment identifies the obstacle to be identified by using the module, and the implementation principle and the technical effect of the related method embodiment are the same, so that reference may be made to the description of the related method embodiment in detail, and details are not repeated here.
Fig. 3 is a structural diagram of a second obstacle recognition device according to an embodiment of the present invention. As shown in fig. 3, the obstacle recognition device of the present embodiment further describes the technical solution of the present invention in more detail on the basis of the technical solution of the embodiment shown in fig. 2.
As shown in fig. 3, the obstacle recognition device of the present embodiment further includes: an obstacle identification module 13. The obstacle identification module 13 is configured to identify the type of each obstacle to be identified in the point cloud of the (N + 1) th frame according to the first obstacle type map of the (N + 1) th frame predicted by the prediction module 12 and the information of the obstacle to be identified of the (N + 1) th frame acquired by the obstacle information acquisition module 10.
Further optionally, in the obstacle recognition device of this embodiment, the obstacle recognition module 13 is specifically configured to:
according to the first obstacle category map of the (N + 1) th frame, identifying the category of each obstacle to be identified in the point cloud of the obstacle to be identified of the (N + 1) th frame;
and judging whether the same obstacle to be recognized in the point cloud of the obstacle to be recognized in the (N + 1) th frame is marked with more than two different categories, if so, marking the category of the obstacle to be recognized according to the number of points respectively corresponding to the more than two different categories in the point cloud of the obstacle to be recognized.
Further optionally, in the obstacle identification device according to this embodiment, the parameter information obtaining module 11 is specifically configured to:
acquiring point cloud layers with at least two heights parallel to a horizontal plane according to point clouds of obstacles to be identified in each frame of the previous N frames; respectively projecting the point cloud layers with at least two heights on a horizontal plane to obtain at least two first point cloud projection images corresponding to each frame;
according to the reflection values of all points on the surface of the obstacle to be recognized in each frame of the previous N frames, identifying the reflection values of all points on the surface of the obstacle to be recognized in the projection of the point cloud of the obstacle to be recognized in each frame on the horizontal plane, and obtaining a first reflection information projection graph corresponding to each frame;
and acquiring a first duty ratio projection drawing of the point cloud of the obstacle to be identified in each frame in the horizontal plane according to the point cloud of the obstacle to be identified in each frame of the previous N frames.
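As one possible reading of the three projections above, the sketch below builds, for a single frame, a binary point cloud projection per height layer, a reflection-information projection (mean reflection value per cell), and a duty ratio projection (per-cell share of points). The grid geometry (cell size, extent, layer boundaries) and the interpretation of "duty ratio" as per-cell point share are assumptions; the patent does not fix them.

```python
import numpy as np

def build_projections(points, reflect, heights=(0.5, 1.5), cell=0.2, extent=40.0):
    """Build the three horizontal-plane maps for one frame.

    points: (M, 3) array of x, y, z coordinates; reflect: (M,) reflection values.
    heights: boundaries splitting the cloud into height layers parallel to the
    horizontal plane (two boundaries give three layers)."""
    n = int(2 * extent / cell)
    ix = np.clip(((points[:, 0] + extent) / cell).astype(int), 0, n - 1)
    iy = np.clip(((points[:, 1] + extent) / cell).astype(int), 0, n - 1)

    # One binary projection per height layer.
    layer_maps = []
    bounds = (-np.inf,) + tuple(heights) + (np.inf,)
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        m = np.zeros((n, n))
        sel = (points[:, 2] >= lo) & (points[:, 2] < hi)
        m[ix[sel], iy[sel]] = 1.0
        layer_maps.append(m)

    # Reflection-information projection: mean reflection value per occupied cell.
    refl = np.zeros((n, n))
    cnt = np.zeros((n, n))
    np.add.at(refl, (ix, iy), reflect)
    np.add.at(cnt, (ix, iy), 1.0)
    refl_map = np.divide(refl, cnt, out=np.zeros_like(refl), where=cnt > 0)

    # Duty ratio projection: share of the frame's points falling in each cell.
    duty_map = cnt / max(len(points), 1)
    return layer_maps, refl_map, duty_map
```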
Further optionally, as shown in fig. 3, the obstacle identification device of this embodiment further includes: the obstacle category acquisition module 14.
The obstacle type obtaining module 14 is configured to obtain a first obstacle type map of an obstacle to be identified in each frame of the previous N frames on the horizontal plane;
further, the obstacle category obtaining module 14 is specifically configured to:
acquiring a first obstacle category map of the obstacle to be identified in the 1st frame on the horizontal plane from a static map;
predicting a first obstacle category map of the (i+1)-th frame according to a pre-trained classifier model, the first obstacle category map of the obstacle to be recognized on the horizontal plane in each frame of the previous i frames, and at least two first point cloud projection maps, a first reflection information projection map and a first duty ratio projection map of each frame of the previous i frames, which are obtained in advance; wherein i is an integer satisfying 1 ≤ i ≤ N-1.
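The bootstrap just described (frame 1 from the static map, each later map predicted from all earlier frames) can be sketched as a loop; `predict` stands in for the pre-trained classifier model and is a placeholder, not an API from the patent.

```python
def bootstrap_category_maps(static_map, frames, predict):
    """Build the first obstacle category map of each of the previous N
    frames: frame 1 comes from the static map; the map of frame i+1 is
    predicted from the maps and feature projections of frames 1..i."""
    maps = [static_map]                             # 1st frame: from the static map
    for i in range(1, len(frames)):
        maps.append(predict(maps[:i], frames[:i]))  # predict the (i+1)-th map
    return maps

# With a toy predictor, the maps of 3 frames come out as [0, 1, 2]:
toy = bootstrap_category_maps(0, [None, None, None],
                              lambda prev_maps, prev_frames: prev_maps[-1] + 1)
```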
Correspondingly, the prediction module 12 is configured to predict the first obstacle category map of the (N+1)th frame of the N+1 frames according to the pre-trained classifier model, the first obstacle category map of the obstacle to be recognized on the horizontal plane in each frame of the previous N frames acquired in advance by the obstacle category acquisition module 14, the preset weight of each frame of the previous N frames, and the at least two first point cloud projection maps, the first reflection information projection map, and the first duty ratio projection map of each frame of the previous N frames acquired by the parameter information acquisition module 11.
Further optionally, as shown in fig. 3, the obstacle identification device of this embodiment further includes: a weight setting module 15.
Wherein the weight setting module 15 is configured to set a weight Wj for the j-th frame of the previous N frames and a weight Wj+1 for the (j+1)-th frame, wherein Wj+1 > Wj, and j is an integer satisfying 1 ≤ j < N; or
set a weight Q for the 1st frame to the int(N/2)-th frame of the previous N frames, and set a weight R for the (int(N/2)+1)-th frame to the N-th frame, wherein R is greater than Q.
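Both weighting schemes can be sketched as follows; the concrete values (a unit linear step, Q = 1, R = 2) and the normalization to sum 1 are illustrative choices, not taken from the patent.

```python
def frame_weights(n, scheme="linear"):
    """Weights for the previous N frames under the two described schemes:
    - 'linear':  strictly increasing weights, so Wj+1 > Wj for every j;
    - 'two_tier': weight Q for frames 1..int(N/2), larger weight R for the rest."""
    if scheme == "linear":
        w = [j + 1 for j in range(n)]   # Wj = j+1 guarantees Wj+1 > Wj
    else:
        half = n // 2                   # int(N/2)
        q, r = 1.0, 2.0                 # any R > Q works
        w = [q] * half + [r] * (n - half)
    s = sum(w)
    return [x / s for x in w]
```

Either way, later (more recent) frames receive weights at least as large as earlier ones, so the most recent observations dominate the prediction of the (N+1)th frame's category map.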
Correspondingly, the prediction module 12 is configured to predict the first obstacle category map of the (N+1)th frame of the N+1 frames according to the pre-trained classifier model, the first obstacle category map of the obstacle to be identified on the horizontal plane in each frame of the previous N frames acquired in advance by the obstacle category acquisition module 14, the weight of each frame of the previous N frames preset by the weight setting module 15, and the at least two first point cloud projection maps, the first reflection information projection map, and the first duty ratio projection map of each frame of the previous N frames acquired by the parameter information acquisition module 11.
Further optionally, as shown in fig. 3, the obstacle identification device of this embodiment further includes: an acquisition module 16 and a training module 17.
The acquisition module 16 is configured to acquire information of a plurality of groups of consecutive N +1 frames of preset obstacles of known categories, and generate an obstacle training set; the information of the preset barrier of each frame comprises point cloud of the preset barrier and a reflection value of each point of the preset barrier;
the training module 17 is configured to train a classifier model according to the information of the preset obstacles of the plurality of groups of consecutive N +1 frames in the obstacle training set acquired by the acquisition module 16.
Further optionally, in the obstacle recognition device of this embodiment, the training module 17 is specifically configured to:
respectively acquiring a second point cloud projection image of point cloud layers with at least two heights of a preset obstacle in each group of frames on a horizontal plane, a second reflection information projection image of the preset obstacle in each group of frames on the horizontal plane and a second duty ratio projection image of the preset obstacle in each group of frames on the horizontal plane according to the preset obstacle information in each frame of the first N frames in the N +1 frames in each group of the obstacle training set;
and training a classifier model according to a second obstacle class diagram of a preset obstacle on a horizontal plane in each group of pre-acquired frames of the previous N frames, a preset weight of each frame in each group of frames, at least two second point cloud projection diagrams, a second reflection information projection diagram and a second duty ratio projection diagram of each frame in the previous N frames and the known class of the preset obstacle corresponding to each group, so as to determine the classifier model.
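How the per-frame inputs might be combined for the classifier can be sketched as a weighted feature vector. The flattened-concatenation layout and the dict keys are assumptions; the patent only states that the category maps, the projection maps, and the frame weights are inputs to the classifier model, not how they are encoded.

```python
import numpy as np

def assemble_features(frames, weights):
    """Stack the per-frame maps (category map, height-layer projections,
    reflection map, duty ratio map) of the previous N frames into one
    feature vector, scaling each frame's maps by its preset weight.

    frames: list of N dicts with keys 'category', 'layers' (list of 2-D
    arrays), 'reflection', 'duty'; weights: list of N floats."""
    parts = []
    for f, w in zip(frames, weights):
        maps = [f["category"], *f["layers"], f["reflection"], f["duty"]]
        parts.append(w * np.stack(maps).ravel())
    return np.concatenate(parts)
```

A training sample for the classifier would then pair such a vector (built from the first N frames of a group) with the known category of the preset obstacle in that group.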
Correspondingly, the prediction module 12 is configured to predict the first obstacle category map of the (N+1)th frame of the N+1 frames according to the classifier model trained in advance by the training module 17, the first obstacle category map of the obstacle to be identified on the horizontal plane in each frame of the previous N frames acquired in advance by the obstacle category acquisition module 14, the weight of each frame of the previous N frames preset by the weight setting module 15, and the at least two first point cloud projection maps, the first reflection information projection map, and the first duty ratio projection map of each frame of the previous N frames acquired by the parameter information acquisition module 11.
The obstacle identification device of this embodiment uses the above modules to identify the obstacle to be identified. Its implementation principle and technical effect are the same as those of the related method embodiment; for details, reference may be made to the description of that embodiment, which is not repeated here.
The invention also provides a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor executes the computer program to implement the obstacle identification method as shown in the above embodiments.
For example, FIG. 4 is a block diagram of an exemplary computer device 12a suitable for implementing embodiments of the present invention. The computer device 12a shown in FIG. 4 is only an example and should not impose any limitation on the functionality or scope of use of embodiments of the present invention.
As shown in FIG. 4, computer device 12a is in the form of a general purpose computing device. The components of computer device 12a may include, but are not limited to: one or more processors 16a, a system memory 28a, and a bus 18a that connects the various system components (including the system memory 28a and the processors 16 a).
Bus 18a represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA (EISA) bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Computer device 12a typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer device 12a and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28a may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM)30a and/or cache memory 32 a. Computer device 12a may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34a may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 4, and commonly referred to as a "hard drive"). Although not shown in FIG. 4, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18a by one or more data media interfaces. System memory 28a may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of the various embodiments of the invention described above in fig. 1-3.
A program/utility 40a having a set (at least one) of program modules 42a may be stored, for example, in system memory 28a, such program modules 42a including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may include an implementation of a network environment. Program modules 42a generally perform the functions and/or methodologies described above in connection with the various embodiments of fig. 1-3 of the present invention.
Computer device 12a may also communicate with one or more external devices 14a (e.g., a keyboard, a pointing device, a display 24a, etc.), with one or more devices that enable a user to interact with computer device 12a, and/or with any devices (e.g., a network card, a modem, etc.) that enable computer device 12a to communicate with one or more other computing devices. Such communication can occur via input/output (I/O) interfaces 22a. Furthermore, computer device 12a can communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via network adapter 20a. As shown, network adapter 20a communicates with the other modules of computer device 12a via bus 18a. It should be appreciated that, although not shown, other hardware and/or software modules may be used in conjunction with computer device 12a, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data archival storage systems, etc.
The processor 16a executes various functional applications and data processing by executing programs stored in the system memory 28a, for example, to implement the obstacle recognition method shown in the above-described embodiment.
The present invention also provides a computer-readable medium on which a computer program is stored, which when executed by a processor implements the obstacle identifying method as shown in the above embodiments.
The computer-readable media of this embodiment may include the RAM 30a and/or cache memory 32a and/or storage system 34a in the system memory 28a in the embodiment illustrated in FIG. 4 described above.
With the development of technology, the propagation path of computer programs is no longer limited to tangible media, and the computer programs can be directly downloaded from a network or acquired by other methods. Accordingly, the computer-readable medium in the present embodiment may include not only tangible media but also intangible media.
The computer-readable medium of the present embodiments may take any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device) or a processor to execute some of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (18)

1. An obstacle identification method, characterized in that the method comprises:
acquiring information of obstacles to be identified of continuous N +1 frames around a current vehicle scanned by a laser radar;
according to the information of the obstacles to be recognized in each frame of the first N frames in the N +1 frames, acquiring a first point cloud projection diagram of a point cloud layer with at least two heights in each frame on a horizontal plane, a first reflection information projection diagram of the obstacles to be recognized in each frame on the horizontal plane, and a first duty ratio projection diagram of the obstacles to be recognized in each frame on the horizontal plane;
predicting the first obstacle category map of the (N+1)th frame of the N+1 frames according to a pre-trained classifier model, a pre-acquired first obstacle category map of the obstacle to be recognized on the horizontal plane in each frame of the previous N frames, a preset weight of each frame of the previous N frames, and at least two first point cloud projection maps, the first reflection information projection map and the first duty ratio projection map of each frame of the previous N frames.
2. The method according to claim 1, wherein after predicting the first obstacle category map of the N +1 th frame in the N +1 th frame according to a pre-trained classifier model, a pre-obtained first obstacle category map of the obstacle to be identified in the horizontal plane in each of the N previous frames, a pre-set weight of each of the N previous frames, and at least two of the first point cloud projection map, the first reflection information projection map, and the first duty ratio projection map of each of the N previous frames, the method further comprises:
and identifying the category of each obstacle to be identified in the point cloud of the (N + 1) th frame according to the first obstacle category map of the (N + 1) th frame and the information of the obstacle to be identified of the (N + 1) th frame.
3. The method according to claim 2, wherein identifying the category of each obstacle to be identified in the point cloud of the N +1 th frame according to the first obstacle category map of the N +1 th frame and the information of the obstacle to be identified of the N +1 th frame specifically comprises:
according to the first obstacle category map of the (N + 1) th frame, identifying the category of each obstacle to be identified in the point cloud of the obstacle to be identified of the (N + 1) th frame;
and judging whether two or more different categories are identified on the same obstacle to be identified in the point cloud of the obstacle to be identified in the (N + 1) th frame, and if so, identifying the category of the obstacle to be identified according to the number of points respectively corresponding to the two or more different categories in the point cloud of the obstacle to be identified.
4. The method according to claim 1, wherein obtaining, according to information of the obstacle to be identified in each frame of a first N frames of the N +1 frames, a first point cloud projection view of a point cloud layer of at least two heights in each frame in a horizontal plane, a first reflection information projection view of the obstacle to be identified in each frame in the horizontal plane, and a first duty ratio projection view of the obstacle to be identified in each frame in the horizontal plane specifically includes:
acquiring point cloud layers with at least two heights parallel to the horizontal plane according to the point clouds of the obstacles to be identified in each frame of the previous N frames; projecting the point cloud layers with the at least two heights on the horizontal plane respectively to obtain at least two first point cloud projection drawings corresponding to each frame;
according to the reflection values of all points on the surface of the obstacle to be recognized in each frame of the previous N frames, identifying the reflection values of all points on the surface of the obstacle to be recognized in the projection of the point cloud of the obstacle to be recognized in each frame on the horizontal plane, and obtaining the first reflection information projection graph corresponding to each frame;
and acquiring the first duty ratio projection drawing of the point cloud of the obstacle to be identified in each frame on the horizontal plane according to the point cloud of the obstacle to be identified in each frame of the previous N frames.
5. The method according to claim 1, wherein the first obstacle category map of the N +1 th frame in the N +1 th frame is predicted according to a pre-trained classifier model, a pre-obtained first obstacle category map of the obstacle to be identified in the horizontal plane in each of the N previous frames, a pre-set weight of each of the N previous frames, and at least two of the first point cloud projection map, the first reflection information projection map, and the first duty ratio projection map of each of the N previous frames, and the method further comprises:
acquiring the first obstacle category map of the obstacle to be identified in each frame of the previous N frames on the horizontal plane;
further, acquiring the first obstacle category map of the obstacle to be identified in each of the previous N frames on the horizontal plane specifically includes:
acquiring the first obstacle category map of the obstacle to be identified in the 1 st frame at the horizontal plane from a static map;
predicting the first obstacle category map of the (i+1)-th frame according to a pre-trained classifier model, the first obstacle category map of the obstacle to be recognized on the horizontal plane in each frame of the previous i frames, and at least two first point cloud projection maps, the first reflection information projection map and the first duty ratio projection map of each frame of the previous i frames, which are obtained in advance; wherein i is an integer satisfying 1 ≤ i ≤ N-1.
6. The method according to claim 1, wherein the first obstacle category map of the N +1 th frame in the N +1 th frame is predicted according to a pre-trained classifier model, a pre-obtained first obstacle category map of the obstacle to be identified in the horizontal plane in each of the N previous frames, a pre-set weight of each of the N previous frames, and at least two of the first point cloud projection map, the first reflection information projection map, and the first duty ratio projection map of each of the N previous frames, and the method further comprises:
setting a weight Wj for the j-th frame of the previous N frames and a weight Wj+1 for the (j+1)-th frame, wherein Wj+1 > Wj, and j is an integer which is more than or equal to 1 and less than N; or
setting a weight Q for the 1st frame to the int(N/2)-th frame of the previous N frames, and setting a weight R for the (int(N/2)+1)-th frame to the N-th frame, wherein R is greater than Q.
7. The method according to any one of claims 1 to 6, wherein the method further includes, according to a pre-trained classifier model, a pre-obtained first obstacle category map of the obstacle to be identified in each of the previous N frames at the horizontal plane, a pre-set weight of each of the previous N frames, and at least two of the first point cloud projection map, the first reflection information projection map, and the first duty ratio projection map of each of the previous N frames, predicting the first obstacle category map of an N +1 th frame of the N +1 frames, before:
collecting information of a plurality of groups of continuous N +1 frames of preset obstacles with known categories to generate an obstacle training set; the information of the preset barrier of each frame comprises a point cloud of the preset barrier and a reflection value of each point of the preset barrier;
and training the classifier model according to the information of the preset obstacles of a plurality of groups of continuous N +1 frames in the obstacle training set.
8. The method according to claim 7, wherein training the classifier model according to the information of the preset obstacles of the plurality of groups of consecutive N +1 frames in the obstacle training set specifically comprises:
respectively acquiring a second point cloud projection drawing of point cloud layers with at least two heights of the preset obstacles in each frame of each group, a second reflection information projection drawing of the preset obstacles in each frame in the horizontal plane and a second duty ratio projection drawing of the preset obstacles in each frame in the horizontal plane according to the information of the preset obstacles in each frame of the first N frames in the N +1 frames of each group in the obstacle training set;
training the classifier model according to a second obstacle category graph of the preset obstacle in the horizontal plane in each frame of the previous N frames of each group, preset weight of each frame in the previous N frames of each group, at least two second point cloud projection graphs, the second reflection information projection graph and the second duty ratio projection graph of each frame in the previous N frames, and known categories of the preset obstacle corresponding to each group, so as to determine the classifier model.
9. An obstacle recognition apparatus, characterized in that the apparatus comprises:
the obstacle information acquisition module is used for acquiring information of obstacles to be identified, which are obtained by scanning continuous N +1 frames around the current vehicle by the laser radar;
a parameter information obtaining module, configured to obtain, according to information of the obstacle to be identified in each frame of a previous N frames of the N +1 frames, a first point cloud projection view of a point cloud layer of at least two heights in each frame on a horizontal plane, a first reflection information projection view of the obstacle to be identified in each frame on the horizontal plane, and a first duty ratio projection view of the obstacle to be identified in each frame on the horizontal plane;
a prediction module, configured to predict the first obstacle category map of the (N+1)th frame of the N+1 frames according to a pre-trained classifier model, a pre-obtained first obstacle category map of the obstacle to be recognized on the horizontal plane in each frame of the previous N frames, a preset weight of each frame of the previous N frames, and at least two first point cloud projection maps, the first reflection information projection map, and the first duty ratio projection map of each frame of the previous N frames.
10. The apparatus of claim 9, further comprising:
and the obstacle identification module is used for identifying the category of each obstacle to be identified in the point cloud of the (N + 1) th frame according to the first obstacle category map of the (N + 1) th frame and the information of the obstacle to be identified of the (N + 1) th frame.
11. The apparatus according to claim 10, wherein the obstacle identification module is specifically configured to:
according to the first obstacle category map of the (N + 1) th frame, identifying the category of each obstacle to be identified in the point cloud of the obstacle to be identified of the (N + 1) th frame;
and judging whether two or more different categories are identified on the same obstacle to be identified in the point cloud of the obstacle to be identified in the (N + 1) th frame, and if so, identifying the category of the obstacle to be identified according to the number of points respectively corresponding to the two or more different categories in the point cloud of the obstacle to be identified.
12. The apparatus of claim 9, wherein the parameter information obtaining module is specifically configured to:
acquiring point cloud layers with at least two heights parallel to the horizontal plane according to the point clouds of the obstacles to be identified in each frame of the previous N frames; projecting the point cloud layers with the at least two heights on the horizontal plane respectively to obtain at least two first point cloud projection drawings corresponding to each frame;
according to the reflection values of all points on the surface of the obstacle to be identified in each frame of the previous N frames, identifying the reflection values of all points on the surface of the obstacle to be identified in the projection of the point cloud of the obstacle to be identified in each frame on the horizontal plane, and obtaining the first reflection information projection drawing corresponding to each frame;
and acquiring the first duty ratio projection drawing of the point cloud of the obstacle to be identified in each frame on the horizontal plane according to the point cloud of the obstacle to be identified in each frame of the previous N frames.
13. The apparatus of claim 9, further comprising:
an obstacle category obtaining module, configured to obtain the first obstacle category map of the obstacle to be identified in the horizontal plane in each of the previous N frames;
further, the obstacle category acquiring module is specifically configured to:
acquiring the first obstacle category map of the obstacle to be identified in the 1 st frame at the horizontal plane from a static map;
predicting the first obstacle category map of the (i+1)-th frame according to a pre-trained classifier model, the first obstacle category map of the obstacle to be recognized on the horizontal plane in each frame of the previous i frames, and at least two first point cloud projection maps, the first reflection information projection map and the first duty ratio projection map of each frame of the previous i frames, which are obtained in advance; wherein i is an integer satisfying 1 ≤ i ≤ N-1.
14. The apparatus of claim 9, further comprising:
a weight setting module, configured to set a weight Wj for the j-th frame of the previous N frames and a weight Wj+1 for the (j+1)-th frame, wherein Wj+1 > Wj, and j is an integer which is more than or equal to 1 and less than N; or
to set a weight Q for the 1st frame to the int(N/2)-th frame of the previous N frames, and a weight R for the (int(N/2)+1)-th frame to the N-th frame, wherein R is greater than Q.
15. The apparatus of any of claims 9-14, further comprising:
the acquisition module is used for acquiring information of a plurality of groups of continuous N +1 frames of preset obstacles of known types to generate an obstacle training set; the information of the preset barrier of each frame comprises a point cloud of the preset barrier and a reflection value of each point of the preset barrier;
and the training module is used for training the classifier model according to the information of the preset obstacles of a plurality of groups of continuous N +1 frames in the obstacle training set.
16. The apparatus of claim 15, wherein the training module is specifically configured to:
for each group in the obstacle training set, acquire, according to the information of the preset obstacle in each of the first N frames of the N+1 frames of the group, second point cloud projection maps of point cloud layers at at least two heights of the preset obstacle in each frame, a second reflection information projection map of the preset obstacle in the horizontal plane in each frame, and a second duty ratio projection map of the preset obstacle in the horizontal plane in each frame;
and train the classifier model according to the second obstacle category map, in the horizontal plane, of the preset obstacle in each of the first N frames of each group, the preset weight of each of the first N frames of each group, the at least two second point cloud projection maps, the second reflection information projection map and the second duty ratio projection map of each of the first N frames, and the known category of the preset obstacle corresponding to each group, so as to determine the classifier model.
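The three per-frame projections named in claim 16 (point cloud projection maps per height layer, a reflection information projection map, and a duty ratio projection map) can be sketched as below. This is an informal interpretation: the grid size, extent and height layers are invented parameters, and the duty ratio map is read here as each cell's share of the obstacle's points, a detail the claim itself does not spell out.

```python
import numpy as np

def projection_maps(points, reflect, grid=(8, 8), extent=4.0,
                    z_layers=(0.0, 1.0, 2.0)):
    """Project one obstacle's point cloud onto a horizontal grid.

    points:  (M, 3) array of x, y, z coordinates;
    reflect: (M,)   per-point reflection values.
    Returns (layer_maps, reflection_map, duty_map)."""
    h, w = grid
    # Map x and y from [-extent, extent) onto grid cell indices.
    ix = np.clip(((points[:, 0] + extent) / (2 * extent) * h).astype(int), 0, h - 1)
    iy = np.clip(((points[:, 1] + extent) / (2 * extent) * w).astype(int), 0, w - 1)

    # Point cloud projection maps: one binary occupancy map per height layer.
    edges = list(z_layers) + [np.inf]
    layer_maps = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_layer = (points[:, 2] >= lo) & (points[:, 2] < hi)
        m = np.zeros(grid)
        m[ix[in_layer], iy[in_layer]] = 1.0
        layer_maps.append(m)

    # Reflection information projection map: mean reflection value per cell.
    refl_sum = np.zeros(grid)
    counts = np.zeros(grid)
    np.add.at(refl_sum, (ix, iy), reflect)
    np.add.at(counts, (ix, iy), 1.0)
    reflection_map = np.divide(refl_sum, counts,
                               out=np.zeros(grid), where=counts > 0)

    # Duty ratio projection map: each cell's share of the total point count.
    duty_map = counts / max(len(points), 1)
    return layer_maps, reflection_map, duty_map
```

Stacking these maps over the first N frames, together with the per-frame weights and category maps, would form the classifier's training input.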
17. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the program, implements the method according to any one of claims 1-8.
18. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-8.
CN201710073031.3A 2017-02-10 2017-02-10 Obstacle identification method and device, computer equipment and readable medium Active CN106919908B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710073031.3A CN106919908B (en) 2017-02-10 2017-02-10 Obstacle identification method and device, computer equipment and readable medium


Publications (2)

Publication Number Publication Date
CN106919908A CN106919908A (en) 2017-07-04
CN106919908B true CN106919908B (en) 2020-07-28

Family

ID=59453621

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710073031.3A Active CN106919908B (en) 2017-02-10 2017-02-10 Obstacle identification method and device, computer equipment and readable medium

Country Status (1)

Country Link
CN (1) CN106919908B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109117825B (en) 2018-09-04 2020-01-17 百度在线网络技术(北京)有限公司 Lane line processing method and device
CN109145489B (en) 2018-09-07 2020-01-17 百度在线网络技术(北京)有限公司 Obstacle distribution simulation method and device based on probability chart and terminal
CN109143242B (en) 2018-09-07 2020-04-14 百度在线网络技术(北京)有限公司 Obstacle absolute velocity estimation method, system, computer device, and storage medium
CN109255181B (en) 2018-09-07 2019-12-24 百度在线网络技术(北京)有限公司 Obstacle distribution simulation method and device based on multiple models and terminal
CN109215136B (en) 2018-09-07 2020-03-20 百度在线网络技术(北京)有限公司 Real data enhancement method and device and terminal
CN109059780B (en) 2018-09-11 2019-10-15 百度在线网络技术(北京)有限公司 Detect method, apparatus, equipment and the storage medium of obstacle height
CN109165629B (en) 2018-09-13 2019-08-23 百度在线网络技术(北京)有限公司 It is multifocal away from visual barrier cognitive method, device, equipment and storage medium
CN109513629B (en) * 2018-11-14 2021-06-11 深圳蓝胖子机器智能有限公司 Method, device and computer readable storage medium for sorting packages
CN109513630B (en) * 2018-11-14 2021-06-11 深圳蓝胖子机器智能有限公司 Package sorting system, control method thereof and storage medium
CN109703568B (en) 2019-02-19 2020-08-18 百度在线网络技术(北京)有限公司 Method, device and server for learning driving strategy of automatic driving vehicle in real time
CN109712421B (en) 2019-02-22 2021-06-04 百度在线网络技术(北京)有限公司 Method, apparatus and storage medium for speed planning of autonomous vehicles
CN113110451B (en) * 2021-04-14 2023-03-14 浙江工业大学 Mobile robot obstacle avoidance method based on fusion of depth camera and single-line laser radar

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102013102153A1 (en) * 2012-03-15 2013-09-19 GM Global Technology Operations LLC Method for combining sensor signals of LiDAR-sensors, involves defining transformation value for one of two LiDAR sensors, which identifies navigation angle and position of sensor, where target scanning points of objects are provided
US8996228B1 (en) * 2012-09-05 2015-03-31 Google Inc. Construction zone object detection using light detection and ranging
US9383753B1 (en) * 2012-09-26 2016-07-05 Google Inc. Wide-view LIDAR with areas of special attention
CN106295586A (en) * 2016-08-16 2017-01-04 长春理工大学 Humanoid target identification method based on single line cloud data machine learning and device



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant