CN117423091A - Obstacle detection method and device, electronic equipment and storage medium


Info

Publication number
CN117423091A
CN117423091A
Authority
CN
China
Prior art keywords
obstacle
suspected
determining
road surface
suspected obstacle
Prior art date
Legal status
Pending
Application number
CN202311442015.9A
Other languages
Chinese (zh)
Inventor
庞伟凇
王宇
陈�光
李锦瑭
孙雪
耿真
Current Assignee
Faw Nanjing Technology Development Co ltd
FAW Group Corp
Original Assignee
Faw Nanjing Technology Development Co ltd
FAW Group Corp
Priority date
Filing date
Publication date
Application filed by Faw Nanjing Technology Development Co ltd and FAW Group Corp
Priority to CN202311442015.9A
Publication of CN117423091A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries


Abstract

The embodiment of the invention discloses an obstacle detection method and device, an electronic device and a storage medium. The method includes: performing road surface identification based on a current point cloud image of a target vehicle to determine the current driving road surface; if the current driving road surface is a preset road surface, acquiring position distribution information and reflectivity corresponding to a first suspected obstacle in the current point cloud image; determining a second suspected obstacle from the first suspected obstacle based on the position distribution information and the reflectivity corresponding to the first suspected obstacle; and determining a non-obstacle in the second suspected obstacle based on the position distribution information corresponding to a vehicle obstacle in the first suspected obstacle and the position distribution information corresponding to the second suspected obstacle. With the technical scheme provided by the embodiment of the invention, obstacles can be judged accurately during intelligent driving or assisted driving, so that obstacles and non-obstacles are distinguished more accurately and effectively and the safety of intelligent driving is improved.

Description

Obstacle detection method and device, electronic equipment and storage medium
Technical Field
The embodiments of the invention relate to computer technology, and in particular to an obstacle detection method and device, an electronic device and a storage medium.
Background
With the development of intelligent driving technology, driving safety has become increasingly important. The accuracy of sensor perception directly affects the safety level of an intelligent driving vehicle. Thanks to advantages such as all-weather operation, centimeter-level ranging accuracy and resistance to backlighting, lidar has gradually become one of the indispensable sensors for intelligent driving and assisted driving.
At present, because traditional segmentation methods (such as obstacle segmentation algorithms) are unsupervised, the lidar cannot draw on any prior information, so obstacles may be misjudged. For example, in rain, on a wet road surface or where water has accumulated, a moving motor vehicle throws up large sprays of water. The infrared light of the lidar scans the water spray and the reflected echo is received by the lidar's receiver, so the scanned target (i.e. the water spray) is determined to be an obstacle that interferes with the normal driving of the vehicle, and avoidance or emergency braking is performed. Traditional segmentation methods therefore misjudge obstacles in such situations, which reduces the safety of intelligent driving.
Disclosure of Invention
The embodiments of the invention provide an obstacle detection method and device, an electronic device and a storage medium, so that obstacles can be judged accurately during intelligent driving or assisted driving, obstacles and non-obstacles can be distinguished more accurately and effectively, and the safety of intelligent driving is improved.
In a first aspect, an embodiment of the present invention provides a method for detecting an obstacle, including:
performing road surface identification based on the current point cloud image of the target vehicle, and determining a current running road surface;
if the current driving road surface is a preset road surface, acquiring position distribution information and reflectivity corresponding to a first suspected obstacle in the current point cloud image;
determining a second suspected obstacle from the first suspected obstacle based on the position distribution information and the reflectivity corresponding to the first suspected obstacle;
and determining non-obstacles in the second suspected obstacle based on the position distribution information corresponding to the vehicle obstacle in the first suspected obstacle and the position distribution information corresponding to the second suspected obstacle.
In a second aspect, an embodiment of the present invention provides an obstacle detection device, including:
the current driving road surface determining module is used for identifying the road surface based on the current point cloud image of the target vehicle and determining the current driving road surface;
the information acquisition module is used for acquiring position distribution information and reflectivity corresponding to a first suspected obstacle in the current point cloud image if the current running road surface is a preset road surface;
the second suspected obstacle determining module is used for determining a second suspected obstacle from the first suspected obstacle based on the position distribution information and the reflectivity corresponding to the first suspected obstacle;
the non-obstacle determination module is used for determining non-obstacles in the second suspected obstacle based on the position distribution information corresponding to the vehicle obstacle in the first suspected obstacle and the position distribution information corresponding to the second suspected obstacle.
In a third aspect, an embodiment of the present invention provides an electronic device, including:
one or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the obstacle detection method as provided by any embodiment of the invention.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the obstacle detection method as provided by any embodiment of the present invention.
According to the technical scheme of the embodiment of the invention, road surface identification is performed based on the current point cloud image of the target vehicle to determine the current driving road surface; if the current driving road surface is a preset road surface, the position distribution information and reflectivity corresponding to a first suspected obstacle in the current point cloud image are acquired, so that the way in which non-obstacles are screened can be chosen according to the condition of the current driving road surface; a second suspected obstacle is determined from the first suspected obstacle based on the position distribution information and reflectivity corresponding to the first suspected obstacle; and a non-obstacle in the second suspected obstacle is determined based on the position distribution information corresponding to the vehicle obstacle in the first suspected obstacle and the position distribution information corresponding to the second suspected obstacle. In this way, obstacles can be judged accurately during intelligent driving or assisted driving, obstacles and non-obstacles can be distinguished more accurately and effectively, and the safety of intelligent driving is improved.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention, and a person skilled in the art may derive other drawings from them without inventive effort.
Fig. 1 is a flowchart of a method for detecting an obstacle according to a first embodiment of the present invention;
FIG. 2 is an exemplary diagram of an intersection ratio according to a first embodiment of the present invention;
fig. 3 is a flowchart of a method for detecting an obstacle according to a second embodiment of the present invention;
fig. 4 is a schematic structural diagram of an obstacle detecting apparatus according to a third embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device implementing the obstacle detection method according to the embodiment of the invention.
Detailed Description
In order that those skilled in the art may better understand the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
Fig. 1 is a flowchart of an obstacle detection method according to the first embodiment of the present invention. The method is applicable to situations in which obstacles are determined automatically during intelligent driving or assisted driving. It may be performed by an obstacle detection device, which may be implemented in hardware and/or software and configured in an electronic device. As shown in Fig. 1, the method includes:
S110, road surface identification is carried out based on the current point cloud image of the target vehicle, and the current running road surface is determined.
The target vehicle may be a vehicle having an intelligent driving function or an auxiliary driving function. A point cloud may refer to a massive set of points that express the spatial distribution of an object and the surface characteristics of the object under the same spatial reference frame. For example, the point cloud image may be used to characterize an object or scene scanned by the radar. The current point cloud image may refer to a point cloud image acquired at the current time. For example, the current point cloud image may refer to a point cloud image corresponding to the current frame. The current running road surface may refer to a road surface on which the target vehicle is currently running.
Specifically, road surface identification is performed according to the current point cloud image acquired by the target vehicle at the current moment. In this embodiment, the judgment mainly concerns the actual condition of the road surface. For example, it may be determined from the current point cloud image that the current road surface condition corresponds to a rainy scene. Because a wet road surface, or standing water in rain, specularly reflects the infrared light emitted by the target vehicle, almost no points representing the ground are scanned. Road surface identification can therefore be performed based on the current point cloud image of the target vehicle: when the number of scanned ground points falls within the range expected for a rainy scene, it can be determined that the current driving road surface is one on which water may be splashed. Likewise, when the number of scanned ground points falls within the interval corresponding to another road surface type, the current driving road surface can be determined to be of that type.
S120, if the current driving road surface is a preset road surface, acquiring position distribution information and reflectivity corresponding to the first suspected obstacle in the current point cloud image.
The preset road surface may be a road surface on which water can be splashed; for example, it may be, but is not limited to, a road surface in a rainy scene, a wet road surface or a road surface with standing water. The first suspected obstacle may refer to all obstacles that the target vehicle determines based on the current point cloud image. Splashed water is genuinely present in the current point cloud image, so a conventional obstacle segmentation algorithm also determines the water as an obstacle. The method and device of this embodiment can be applied on top of a traditional obstacle segmentation algorithm to identify and filter out non-real obstacles such as water spray, i.e. non-obstacles. Accordingly, the first suspected obstacle may include real obstacles and non-obstacles. A real obstacle may refer to an object that can affect the normal driving of the target vehicle, for example another vehicle, a pedestrian or a road block adjacent to the target vehicle. A non-obstacle may refer to an object that does not affect the normal driving of the target vehicle, for example splashed water, hail or a wind-blown plastic bag. Each first suspected obstacle in the current point cloud image is composed of a number of scanning points, and each scanning point is acquired together with its position coordinate information in the same coordinate system. The position distribution information may refer to the position coordinate information corresponding to each scanning point constituting the first suspected obstacle, and the reflectivity may refer to the reflectivity corresponding to each of those scanning points.
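For illustration only, a suspected obstacle can be modelled as a cluster of scanning points, each carrying position coordinates and a reflectivity value. The following minimal Python sketch shows one possible representation; the class and field names are assumptions of this illustration and are not prescribed by the disclosure.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SuspectedObstacle:
    """One segmented cluster of lidar scanning points (illustrative representation)."""
    points_xyz: np.ndarray    # shape (N, 3): x, y, z of each scanning point in a common frame
    reflectivity: np.ndarray  # shape (N,): reflectivity of each scanning point

    @property
    def num_points(self) -> int:
        # number of scanning points constituting the suspected obstacle
        return len(self.points_xyz)
```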
Specifically, if the current driving road surface is a preset road surface, which indicates that the target vehicle may scan splashed water or other non-obstacles while driving, the non-obstacle filtering function corresponding to the preset road surface is started automatically, and the position distribution information and reflectivity corresponding to the first suspected obstacle in the current point cloud image are acquired. If the current driving road surface is not a preset road surface, the target vehicle will not scan splashed water or other non-obstacles while driving, so the non-obstacle filtering function corresponding to the preset road surface is not started and road surface detection simply continues on the point cloud images.
S130, determining a second suspected obstacle from the first suspected obstacle based on the position distribution information and the reflectivity corresponding to the first suspected obstacle.
The second suspected obstacle may refer to the portion of the first suspected obstacles, screened out from them, that is more likely to be a non-obstacle. The basis for determining the second suspected obstacle from the first suspected obstacle is the physical attributes corresponding to each obstacle.
Specifically, it is detected whether the position distribution information corresponding to the first suspected obstacle matches the position distribution information expected of a real obstacle. If not, it is further detected whether the reflectivity corresponding to the first suspected obstacle matches the reflectivity expected of a real obstacle. If it does not, the first suspected obstacle is determined to be a second suspected obstacle.
S140, determining a non-obstacle in the second suspected obstacle based on the position distribution information corresponding to the vehicle obstacle in the first suspected obstacle and the position distribution information corresponding to the second suspected obstacle.
The vehicle obstacle may refer to a vehicle adjacent to the target vehicle. In this embodiment, the position distribution information corresponding to the vehicle obstacle and the position distribution information corresponding to the second suspected obstacle are used to determine whether the second suspected obstacle is caused or generated by a vehicle adjacent to the target vehicle. Specifically, if it is determined, based on the position distribution information corresponding to the vehicle obstacle and the position distribution information corresponding to the second suspected obstacle, that the second suspected obstacle is caused by the vehicle obstacle, the second suspected obstacle is determined to be a non-obstacle. In this embodiment, the non-obstacle is mainly splashed water.
According to the technical scheme of this embodiment, road surface identification is performed based on the current point cloud image of the target vehicle to determine the current driving road surface; if the current driving road surface is a preset road surface, the position distribution information and reflectivity corresponding to a first suspected obstacle in the current point cloud image are acquired, so that the way in which non-obstacles are screened can be chosen according to the condition of the current driving road surface; a second suspected obstacle is determined from the first suspected obstacle based on the position distribution information and reflectivity corresponding to the first suspected obstacle; and a non-obstacle in the second suspected obstacle is determined based on the position distribution information corresponding to the vehicle obstacle in the first suspected obstacle and the position distribution information corresponding to the second suspected obstacle. In this way, obstacles can be judged accurately during intelligent driving or assisted driving, obstacles and non-obstacles can be distinguished more accurately and effectively, and the safety of intelligent driving is improved.
Based on the above technical solution, S110 may include: determining the number of ground points in the current point cloud image of the target vehicle; and determining the current road surface state based on the number of ground points.
The ground points may be the points in the point cloud image that represent the ground, i.e. the points obtained when the target vehicle scans the ground, and the number of ground points may refer to the number of such points in the current point cloud image. Specifically, a judgment area for the road surface state is preset, and the number of ground points is determined within the judgment area corresponding to the current point cloud image. The current road surface state corresponding to the road surface on which the target vehicle is currently driving is then determined based on the correspondence between the number of ground points and road surface states and on the number of ground points counted in the judgment area.
For example, "determining the number of ground points from the judgment area corresponding to the current point cloud image" may include: determining all points whose coordinates fall within the judgment area based on the edge coordinates corresponding to the judgment area; and, based on the labels attached to ground-representing scanning points during scanning, determining the ground points from all points whose coordinates fall within the judgment area and counting the number of ground points in the judgment area.
Based on the above technical solution, "determining the current road surface state based on the number of ground points" may include: if the number of the ground points is smaller than a preset ground point threshold value, determining that the current running road surface is a preset road surface; and if the number of the ground points is greater than or equal to the preset ground point threshold value, determining that the current running road surface is a non-preset road surface.
Because a wet road surface, or standing water in rain, specularly reflects the infrared light emitted by the target vehicle, almost no points representing the ground are scanned. The correspondence between the number of ground points and the various road surface states can therefore be calibrated in advance, so that the current road surface state of the road on which the target vehicle is driving can be determined from the number of ground points in the current point cloud image. In this embodiment, a road surface on which water can be splashed is taken as the preset road surface and a road surface on which water cannot be splashed as the non-preset road surface: if the number of ground points is smaller than the preset ground point threshold, the current driving road surface is determined to be a preset road surface; if the number of ground points is greater than or equal to the preset ground point threshold, the current driving road surface is determined to be a non-preset road surface.
It should be noted that in practical applications, several preset road surface types may be defined in advance according to different ranges of ground point counts, so that the current road surface state of the road on which the target vehicle is driving can be determined more accurately, and at finer granularity, from the number of ground points in the judgment area.
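A minimal sketch of the ground-point-based road surface judgment is given below, assuming an axis-aligned rectangular judgment area in the vehicle frame and ground labels assigned during segmentation; the function name, the area format and the threshold are assumptions of this illustration, not values from the disclosure.

```python
import numpy as np

def classify_road_surface(points_xyz: np.ndarray,
                          is_ground: np.ndarray,
                          judgment_area: tuple,
                          ground_point_threshold: int) -> str:
    """Count labelled ground points inside the judgment area and compare to a threshold."""
    x_min, x_max, y_min, y_max = judgment_area  # assumed axis-aligned rectangle
    in_area = ((points_xyz[:, 0] >= x_min) & (points_xyz[:, 0] <= x_max) &
               (points_xyz[:, 1] >= y_min) & (points_xyz[:, 1] <= y_max))
    num_ground_points = int(np.count_nonzero(in_area & is_ground))

    # Few ground returns suggest specular reflection from a wet or water-covered road,
    # i.e. the preset (water-splash-prone) road surface.
    return "preset" if num_ground_points < ground_point_threshold else "non-preset"
```

A multi-level variant would simply map several count intervals to additional road surface types, as noted above.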
Based on the above technical scheme, the method further includes: after determining the non-obstacles in the second suspected obstacle, dynamically tracking the remaining obstacles in the current point cloud image and determining the existence duration corresponding to each remaining obstacle; and detecting the remaining obstacles based on the existence duration and determining a remaining obstacle whose existence duration is greater than a preset duration threshold to be a target obstacle.
A remaining obstacle is an obstacle that remains after the non-obstacles have been removed from the first suspected obstacle. The existence duration may refer to the duration from the moment an obstacle is first detected in the consecutive point cloud images to the moment it is no longer detected in them. A target obstacle may refer to an object that can affect the normal driving of the target vehicle, i.e. a real obstacle as described above.
Specifically, after the non-obstacles among the second suspected obstacles are determined, the remaining obstacles in the current point cloud image can be tracked dynamically to detect whether any non-obstacles remain among them, thereby providing a second round of filtering and improving the accuracy of obstacle detection. Dynamic tracking of the remaining obstacles requires jointly acquiring the point cloud images of consecutive frames before and after the current point cloud image, so that the remaining obstacles in each frame can be converted into the world coordinate system. The remaining obstacles are then matched according to their position information in the world coordinate system and the Intersection-over-Union (IoU) to obtain a matching result, from which the existence duration of each continuously detected remaining obstacle can be determined. For example, if remaining obstacle a was not matched before the current frame, is matched in the point cloud images of the current frame and the three consecutive frames after it, and is not matched in the fourth frame after the current frame, then the matched remaining obstacle a is added to the obstacle list for the current time corresponding to the current frame, the current time is recorded as its starting occurrence time, and its existence duration is updated based on the matching result of each following frame, until remaining obstacle a fails to be matched in the fourth frame after the current frame and updating of its existence duration stops. Any remaining obstacle in the obstacle list whose existence duration is greater than the preset duration threshold is then determined to be a target obstacle.
In this embodiment, after all the remaining obstacles in each frame's point cloud image have been matched, the obstacle list corresponding to that frame can be determined. The continuously detected remaining obstacles are identified from the per-frame obstacle lists, and the existence duration of each remaining obstacle is determined from the first time and the last time at which it was detected.
It should be noted that "matching the remaining obstacle according to the position information and the intersection ratio of the remaining obstacle in the world coordinate system to obtain a matching result" may include: for example, the obstacle list of the history frame is a, in which there are remaining obstacles a1, a2 … an. The obstacle list of the current frame is B, in which there are remaining obstacles B1, B2 … bm. Based on the position information of the remaining obstacles in the world coordinate system, calculating the distance information between every two remaining obstacles respectively, and matching each remaining obstacle according to a Hungary matching strategy to obtain a matching result. For the remaining obstacles, such as a1 and b2, that match successfully, the cross-over ratio between a1 and b2 is calculated. If the intersection ratio is larger than the set intersection ratio threshold value, which indicates that the similarity between a1 and b2 is high, determining that a1 and b2 are the same residual obstacle acquired in the point cloud images of different frames. Thus, b2 corresponding to the current frame can be added to the obstacle list of the history frame, and the existence time period corresponding to each obstacle recorded in the obstacle list of the history frame can be updated. Where the intersection ratio (loU) may refer to the ratio between the intersection and union of two bounding boxes, graphics or patterns corresponding thereto. Fig. 2 gives an example of an intersection ratio, see fig. 2, and the intersection of two bounding boxes can be represented by the area of the overlap between bounding box a and bounding box B. The union of the two bounding boxes can be represented by adding the area corresponding to bounding box a and the area corresponding to bounding box B and clipping out the area corresponding to the intersection.
Example two
Fig. 3 is a flowchart of an obstacle detection method according to a second embodiment of the present invention, where a process of determining a second suspected obstacle from first suspected obstacles is described in detail based on the above embodiments. Wherein the explanation of the same or corresponding terms as those of the above embodiments is not repeated herein. As shown in fig. 3, the method includes:
S310, road surface identification is carried out based on the current point cloud image of the target vehicle, and the current running road surface is determined.
S320, if the current driving road surface is a preset road surface, acquiring position distribution information and reflectivity corresponding to the first suspected obstacle in the current point cloud image.
S330, determining the size, the height and the occupied area corresponding to the first suspected obstacle based on the position distribution information corresponding to the first suspected obstacle.
The size may refer to the length, width and height of the obstacle. The height may refer to the ground clearance of the highest point of the obstacle. The occupied area may refer to the area of the obstacle's projection onto the horizontal plane. Specifically, the position distribution information corresponding to the first suspected obstacle comprises the position distribution information of all the scanning points forming the first suspected obstacle, and the size, height and occupied area corresponding to the first suspected obstacle are determined from the position distribution information of all those scanning points.
S340, determining, from the first suspected obstacle, a third suspected obstacle whose size meets a preset size threshold and whose height meets a preset height threshold.
The third suspected obstacle may refer to the portion of the first suspected obstacles, screened out from them, that is more likely to be a non-obstacle. The preset size threshold may be a threshold set in advance based on the typical size of non-obstacles, and the preset height threshold a threshold set in advance based on the height at which non-obstacles typically appear. Specifically, if the size corresponding to a first suspected obstacle meets the preset size threshold, it is further judged whether its height meets the preset height threshold; if so, that first suspected obstacle is determined to be a third suspected obstacle.
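A minimal sketch of S330 and S340 is given below; the axis-aligned geometry, the assumption that z = 0 is the ground plane and the numeric thresholds are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def size_height_footprint(points_xyz: np.ndarray):
    """Axis-aligned size, top height and occupied area of a cluster of scanning points."""
    mins, maxs = points_xyz.min(axis=0), points_xyz.max(axis=0)
    length, width, vertical_extent = maxs - mins
    top_height = maxs[2]            # highest point, assuming z = 0 is the ground plane
    occupied_area = length * width  # area of the horizontal projection
    return (length, width, vertical_extent), top_height, occupied_area

def is_third_suspected(points_xyz: np.ndarray,
                       max_size=(3.0, 2.0, 1.5),
                       max_top_height=1.0) -> bool:
    """Size and height screening of a first suspected obstacle (placeholder thresholds)."""
    (length, width, vertical_extent), top_height, _ = size_height_footprint(points_xyz)
    size_ok = (length <= max_size[0] and width <= max_size[1]
               and vertical_extent <= max_size[2])
    return size_ok and top_height <= max_top_height
```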
S350, determining the scanning point density and the low-reflection point ratio corresponding to the third suspected obstacle based on the number of scanning points, the occupied area and the reflectivity corresponding to the third suspected obstacle.
The number of scanning points may refer to the number of all scanning points constituting the third suspected obstacle. The scanning point density may refer to the density of scanning points within a certain range. A low-reflection point may refer to a scanning point whose reflectivity is below a certain threshold, and the low-reflection point ratio may refer to the proportion of low-reflection points among all the scanning points.
Illustratively, S350 may include: determining the scanning point density corresponding to the third suspected obstacle based on the number of scanning points and the occupied area corresponding to the third suspected obstacle; determining the number of low-reflection points, i.e. points whose reflectivity is smaller than the preset reflectivity, based on the reflectivity corresponding to each scanning point in the third suspected obstacle; and determining the low-reflection point ratio corresponding to the third suspected obstacle based on the number of low-reflection points and the number of scanning points corresponding to the third suspected obstacle.
Specifically, the number of scanning points corresponding to the third suspected obstacle is divided by its occupied area, and the quotient is taken as the scanning point density corresponding to the third suspected obstacle. Based on the reflectivity corresponding to each scanning point in the third suspected obstacle, the scanning points whose reflectivity is smaller than the preset reflectivity are identified as low-reflection points and counted. The number of low-reflection points is then divided by the number of scanning points corresponding to the third suspected obstacle, and the quotient is taken as the low-reflection point ratio corresponding to the third suspected obstacle.
S360, determining, from the third suspected obstacle, a second suspected obstacle whose scanning point density meets a preset density threshold and whose low-reflection point ratio meets a preset proportion threshold.
Specifically, if the scanning point density corresponding to the third suspected obstacle meets the preset density threshold, it is further judged whether its low-reflection point ratio meets the preset proportion threshold; if so, that third suspected obstacle is determined to be a second suspected obstacle.
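S350 and S360 can be sketched as follows; the threshold values and the direction of the density comparison (water spray tends to form sparse, low-reflectivity clusters) are assumptions of this illustration, not values from the disclosure.

```python
import numpy as np

def is_second_suspected(reflectivity: np.ndarray,
                        occupied_area: float,
                        low_reflectivity: float = 0.1,
                        max_density: float = 50.0,
                        min_low_ratio: float = 0.8) -> bool:
    """Scanning point density and low-reflection point ratio checks (placeholder thresholds)."""
    num_points = len(reflectivity)
    if num_points == 0 or occupied_area <= 0:
        return False

    density = num_points / occupied_area                 # scanning point density
    low_points = int(np.count_nonzero(reflectivity < low_reflectivity))
    low_ratio = low_points / num_points                  # low-reflection point ratio

    return density <= max_density and low_ratio >= min_low_ratio
```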
S370, determining a non-obstacle in the second suspected obstacle based on the position distribution information corresponding to the vehicle obstacle in the first suspected obstacle and the position distribution information corresponding to the second suspected obstacle.
According to the technical scheme of this embodiment, the size, height and occupied area corresponding to the first suspected obstacle are determined based on its position distribution information; a third suspected obstacle whose size meets the preset size threshold and whose height meets the preset height threshold is determined from the first suspected obstacle; the scanning point density and low-reflection point ratio corresponding to the third suspected obstacle are determined based on its number of scanning points, occupied area and reflectivity; and a second suspected obstacle whose scanning point density meets the preset density threshold and whose low-reflection point ratio meets the preset proportion threshold is determined from the third suspected obstacle. In this way, physical attributes of non-obstacles, such as size, height, scanning point density and low-reflection point ratio, are used to accurately identify the suspected obstacles that fit those attributes, which narrows the screening range for non-obstacles, further improving the accuracy of obstacle discrimination and the safety of intelligent driving.
Based on the above technical solution, S370 may include: determining an obstacle distance between the vehicle obstacle and the second suspected obstacle based on the position distribution information corresponding to the vehicle obstacle in the first suspected obstacle and the position distribution information corresponding to the second suspected obstacle; and determining a second suspected obstacle with the obstacle distance smaller than the preset distance threshold value as a non-obstacle.
Wherein, the vehicle obstacle may refer to a vehicle adjacent to the periphery of the target vehicle. The obstacle distance may refer to a distance between two obstacles. In this embodiment, in order to determine whether the obstacle is a splash of water from a vehicle adjacent to the target vehicle, the obstacle distance between the vehicle obstacle and the second suspected obstacle is determined, and whether the second suspected obstacle is a splash of water that is caused or generated by the vehicle obstacle is determined by the obstacle distance.
It should be noted that all the second suspected obstacles may first be sorted and then traversed forward or backward along the driving direction of the vehicle obstacle. If, for a second suspected obstacle, there is a vehicle obstacle whose longitudinal distance to it is smaller than the set longitudinal distance threshold and whose lateral distance to it is also smaller than the set lateral distance threshold, this indicates that the second suspected obstacle is caused or generated by that vehicle obstacle, for example water splashed up by it, and the second suspected obstacle is determined to be a non-obstacle. In this way non-obstacles such as water spray are screened out accurately through the above steps, obstacles and non-obstacles are distinguished more accurately and effectively, and the safety of intelligent driving is improved.
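The proximity check can be sketched as follows, assuming coordinates in a frame whose x axis is the driving direction of the vehicle obstacle; the distance thresholds are placeholders, not values from the disclosure.

```python
import numpy as np

def is_vehicle_splash(suspect_center: np.ndarray,
                      vehicle_centers: np.ndarray,
                      max_longitudinal: float = 5.0,
                      max_lateral: float = 2.0) -> bool:
    """True if some vehicle obstacle lies within both distance gates of the suspect,
    in which case the second suspected obstacle is treated as a non-obstacle."""
    if len(vehicle_centers) == 0:
        return False
    longitudinal = np.abs(vehicle_centers[:, 0] - suspect_center[0])  # along driving direction
    lateral = np.abs(vehicle_centers[:, 1] - suspect_center[1])       # across driving direction
    return bool(np.any((longitudinal < max_longitudinal) & (lateral < max_lateral)))
```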
The following is an embodiment of the obstacle detection device provided by the embodiments of the present invention. It belongs to the same inventive concept as the obstacle detection method of the above embodiments; for details not described in this device embodiment, reference may be made to the embodiments of the obstacle detection method.
Example III
Fig. 4 is a schematic structural diagram of an obstacle detecting apparatus according to a third embodiment of the present invention. As shown in fig. 4, the apparatus includes: the current driving surface determination module 410, the information acquisition module 420, the second suspected obstacle determination module 430, and the non-obstacle determination module 440.
The current driving road surface determining module 410 is configured to perform road surface recognition based on a current point cloud image of the target vehicle, and determine a current driving road surface; the information obtaining module 420 is configured to obtain location distribution information and reflectivity corresponding to a first suspected obstacle in the current point cloud image if the current driving road surface is a preset road surface; the second suspected obstacle determining module 430 is configured to determine a second suspected obstacle from the first suspected obstacles based on the position distribution information and the reflectivity corresponding to the first suspected obstacle; the non-obstacle determining module 440 is configured to determine a non-obstacle in the second suspected obstacle based on the position distribution information corresponding to the vehicle obstacle in the first suspected obstacle and the position distribution information corresponding to the second suspected obstacle.
According to the technical scheme of this embodiment, road surface identification is performed based on the current point cloud image of the target vehicle to determine the current driving road surface; if the current driving road surface is a preset road surface, the position distribution information and reflectivity corresponding to a first suspected obstacle in the current point cloud image are acquired, so that the way in which non-obstacles are screened can be chosen according to the condition of the current driving road surface; a second suspected obstacle is determined from the first suspected obstacle based on the position distribution information and reflectivity corresponding to the first suspected obstacle; and a non-obstacle in the second suspected obstacle is determined based on the position distribution information corresponding to the vehicle obstacle in the first suspected obstacle and the position distribution information corresponding to the second suspected obstacle. In this way, obstacles can be judged accurately during intelligent driving or assisted driving, obstacles and non-obstacles can be distinguished more accurately and effectively, and the safety of intelligent driving is improved.
Alternatively, the current driving surface determination module 410 may include:
the ground point quantity determining submodule is used for determining the quantity of ground points in the current point cloud image of the target vehicle;
the current road surface state determining submodule is used for determining the current road surface state based on the number of the ground points.
Optionally, the current road surface state determination submodule is specifically configured to: if the number of the ground points is smaller than a preset ground point threshold value, determining that the current running road surface is a preset road surface; and if the number of the ground points is greater than or equal to the preset ground point threshold value, determining that the current running road surface is a non-preset road surface.
Optionally, the second suspected obstacle determining module 430 may include:
the attribute parameter determining submodule is used for determining the size, the height and the occupied area corresponding to the first suspected obstacle based on the position distribution information corresponding to the first suspected obstacle;
the third suspected obstacle determining submodule is used for determining, from the first suspected obstacle, a third suspected obstacle whose size meets a preset size threshold and whose height meets a preset height threshold;
the proportion parameter determining submodule is used for determining the scanning point density and the low-reflection point ratio corresponding to the third suspected obstacle based on the number of scanning points, the occupied area and the reflectivity corresponding to the third suspected obstacle;
the second suspected obstacle determining submodule is used for determining, from the third suspected obstacle, a second suspected obstacle whose scanning point density meets a preset density threshold and whose low-reflection point ratio meets a preset proportion threshold.
Optionally, the proportion parameter determining submodule is specifically configured to: determine the scanning point density corresponding to the third suspected obstacle based on the number of scanning points and the occupied area corresponding to the third suspected obstacle; determine the number of low-reflection points, i.e. points whose reflectivity is smaller than the preset reflectivity, based on the reflectivity corresponding to each scanning point in the third suspected obstacle; and determine the low-reflection point ratio corresponding to the third suspected obstacle based on the number of low-reflection points and the number of scanning points corresponding to the third suspected obstacle.
Optionally, the non-obstacle determining module 440 is specifically configured to: determining an obstacle distance between the vehicle obstacle and the second suspected obstacle based on the position distribution information corresponding to the vehicle obstacle in the first suspected obstacle and the position distribution information corresponding to the second suspected obstacle; and determining a second suspected obstacle with the obstacle distance smaller than the preset distance threshold value as a non-obstacle.
Optionally, the apparatus further comprises:
the existence duration determining module is used for dynamically tracking the residual obstacle in the current point cloud image after determining the non-obstacle in the second suspected obstacle, and determining the existence duration corresponding to the residual obstacle;
the target obstacle determining module is used for detecting the residual obstacle based on the existence time length and determining the residual obstacle with the existence time length larger than the preset time length threshold value as the target obstacle.
The obstacle detection device provided by the embodiment of the invention can execute the obstacle detection method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of executing the obstacle detection method.
It should be noted that, in the above embodiment of the obstacle detection device, the units and modules included are divided only according to functional logic and are not limited to the above division, as long as the corresponding functions can be implemented; in addition, the specific names of the functional units are only used to distinguish them from one another and do not limit the protection scope of the present invention.
Example IV
Fig. 5 shows a schematic diagram of the structure of an electronic device 10 that may be used to implement an embodiment of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic equipment may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 5, the electronic device 10 includes at least one processor 11, and a memory, such as a Read Only Memory (ROM) 12, a Random Access Memory (RAM) 13, etc., communicatively connected to the at least one processor 11, in which the memory stores a computer program executable by the at least one processor, and the processor 11 may perform various appropriate actions and processes according to the computer program stored in the Read Only Memory (ROM) 12 or the computer program loaded from the storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data required for the operation of the electronic device 10 may also be stored. The processor 11, the ROM 12 and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to bus 14.
Various components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, Digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 11 performs the respective methods and processes described above, such as the obstacle detection method.
In some embodiments, the obstacle detection method may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into the RAM 13 and executed by the processor 11, one or more steps of the obstacle detection method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the obstacle detection method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor and may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host, and is a host product in a cloud computing service system, so that the defects of high management difficulty and weak service expansibility in the traditional physical hosts and VPS service are overcome.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention are achieved, and the present invention is not limited herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (10)

1. An obstacle detection method, comprising:
performing road surface identification based on the current point cloud image of the target vehicle, and determining a current driving road surface;
if the current driving road surface is a preset road surface, acquiring position distribution information and reflectivity corresponding to a first suspected obstacle in the current point cloud image;
determining a second suspected obstacle from the first suspected obstacle based on the position distribution information and the reflectivity corresponding to the first suspected obstacle;
and determining a non-obstacle in the second suspected obstacle based on the position distribution information corresponding to the vehicle obstacle in the first suspected obstacle and the position distribution information corresponding to the second suspected obstacle.
2. The method of claim 1, wherein the performing road surface identification based on the current point cloud image of the target vehicle and determining the current driving road surface comprises:
determining the number of ground points in the current point cloud image of the target vehicle;
and determining the current driving road surface based on the number of ground points.
3. The method of claim 2, wherein the determining the current driving road surface based on the number of ground points comprises:
if the number of the ground points is smaller than a preset ground point threshold value, determining that the current driving road surface is a preset road surface;
and if the number of the ground points is greater than or equal to the preset ground point threshold value, determining that the current driving road surface is a non-preset road surface.
4. The method of claim 1, wherein the determining a second suspected obstacle from the first suspected obstacle based on the position distribution information and reflectivity corresponding to the first suspected obstacle comprises:
determining the size, the height and the occupied area corresponding to the first suspected obstacle based on the position distribution information corresponding to the first suspected obstacle;
determining, from the first suspected obstacle, a third suspected obstacle whose size meets a preset size threshold value and whose height meets a preset height threshold value;
determining the scanning point density and the low-reflectivity point ratio corresponding to the third suspected obstacle based on the number of scanning points, the occupied area and the reflectivity corresponding to the third suspected obstacle;
and determining, as the second suspected obstacle, the third suspected obstacle whose scanning point density meets a preset density threshold value and whose low-reflectivity point ratio meets a preset proportion threshold value.
5. The method of claim 4, wherein the determining the scanning point density and the low-reflectivity point ratio corresponding to the third suspected obstacle based on the number of scanning points, the occupied area and the reflectivity corresponding to the third suspected obstacle comprises:
determining the scanning point density corresponding to the third suspected obstacle based on the number of scanning points and the occupied area corresponding to the third suspected obstacle;
determining the number of low-reflectivity points, namely the scanning points whose reflectivity is smaller than a preset reflectivity, based on the reflectivity corresponding to each scanning point in the third suspected obstacle;
and determining the low-reflectivity point ratio corresponding to the third suspected obstacle based on the number of low-reflectivity points and the number of scanning points corresponding to the third suspected obstacle.
6. The method of claim 1, wherein the determining a non-obstacle in the second suspected obstacle based on the position distribution information corresponding to the vehicle obstacle in the first suspected obstacle and the position distribution information corresponding to the second suspected obstacle comprises:
determining an obstacle distance between the vehicle obstacle and the second suspected obstacle based on the position distribution information corresponding to the vehicle obstacle in the first suspected obstacle and the position distribution information corresponding to the second suspected obstacle;
and determining, as a non-obstacle, the second suspected obstacle whose obstacle distance is smaller than a preset distance threshold.
7. The method of claim 1, wherein after determining a non-obstacle in the second suspected obstacle, the method further comprises:
dynamically tracking the remaining obstacles in the current point cloud image, and determining an existence duration corresponding to the remaining obstacles;
and detecting the remaining obstacles based on the existence duration, and determining a remaining obstacle whose existence duration is greater than a preset duration threshold as a target obstacle.
8. An obstacle detecting apparatus, comprising:
the current driving road surface determining module is used for performing road surface identification based on the current point cloud image of the target vehicle and determining a current driving road surface;
the information acquisition module is used for acquiring position distribution information and reflectivity corresponding to a first suspected obstacle in the current point cloud image if the current driving road surface is a preset road surface;
the second suspected obstacle determining module is used for determining a second suspected obstacle from the first suspected obstacle based on the position distribution information and the reflectivity corresponding to the first suspected obstacle;
and the non-obstacle determination module is used for determining a non-obstacle in the second suspected obstacle based on the position distribution information corresponding to the vehicle obstacle in the first suspected obstacle and the position distribution information corresponding to the second suspected obstacle.
9. An electronic device, the electronic device comprising:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the obstacle detection method as recited in any one of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the obstacle detection method according to any one of claims 1-7.
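
To make the claimed pipeline easier to follow, the sketch below shows, in Python, one way the steps of claims 1-7 could be wired together on clustered point cloud data. It is a minimal, non-normative illustration: every function name, the Cluster structure, all threshold constants, and the direction in which each quantity is compared against its preset threshold are assumptions introduced here for readability, not details taken from the specification.

# Illustrative only: a minimal sketch of the pipeline of claims 1-7.
# All names, constants, and comparison directions are assumptions.
from dataclasses import dataclass
import numpy as np

@dataclass
class Cluster:
    points: np.ndarray        # (N, 3) x, y, z coordinates of the cluster's scanning points
    reflectivity: np.ndarray  # (N,) per-point reflectivity
    is_vehicle: bool = False  # True if an upstream detector labelled the cluster a vehicle obstacle
    first_seen: float = 0.0   # timestamp of first observation, used for persistence tracking

# Assumed threshold values; the claims only require "preset" thresholds.
GROUND_POINT_THRESHOLD = 5000   # few ground returns -> treat the road surface as "preset"
SIZE_THRESHOLD = 2.0            # m, horizontal extent gate for a third suspected obstacle
HEIGHT_THRESHOLD = 1.0          # m, height gate for a third suspected obstacle
DENSITY_THRESHOLD = 50.0        # points / m^2, sparse clusters resemble spray or dust
LOW_REFLECTIVITY = 0.1          # a scanning point below this counts as a low-reflectivity point
LOW_RATIO_THRESHOLD = 0.6       # proportion of low-reflectivity points
DISTANCE_THRESHOLD = 3.0        # m, closeness to a vehicle obstacle (claim 6)
DURATION_THRESHOLD = 1.0        # s, persistence needed to confirm a target obstacle (claim 7)

def is_preset_road_surface(ground_point_count: int) -> bool:
    # Claims 2-3: the driving road surface is "preset" when too few ground points are found.
    return ground_point_count < GROUND_POINT_THRESHOLD

def footprint_area(points: np.ndarray) -> float:
    # Axis-aligned footprint on the ground plane; a deliberate simplification of "occupied area".
    spans = points[:, :2].max(axis=0) - points[:, :2].min(axis=0)
    return max(float(spans[0] * spans[1]), 1e-6)

def second_suspected_obstacles(first_suspects: list) -> list:
    # Claims 4-5: size/height gate, then scanning point density and low-reflectivity point ratio.
    result = []
    for c in first_suspects:
        extent = c.points.max(axis=0) - c.points.min(axis=0)
        size, height = float(np.linalg.norm(extent[:2])), float(extent[2])
        if size > SIZE_THRESHOLD or height > HEIGHT_THRESHOLD:
            continue  # does not qualify as a third suspected obstacle
        density = len(c.points) / footprint_area(c.points)
        low_ratio = float(np.mean(c.reflectivity < LOW_REFLECTIVITY))
        # Assumed comparison direction: sparse, mostly low-reflectivity clusters are suspect.
        if density < DENSITY_THRESHOLD and low_ratio > LOW_RATIO_THRESHOLD:
            result.append(c)
    return result

def remove_non_obstacles(first_suspects: list, second_suspects: list) -> list:
    # Claim 6: a second suspected obstacle close to a vehicle obstacle is treated as a
    # non-obstacle (e.g. exhaust or water spray) and dropped; the rest are kept.
    vehicles = [c for c in first_suspects if c.is_vehicle]
    kept = []
    for s in second_suspects:
        centre = s.points.mean(axis=0)[:2]
        near_vehicle = any(
            np.linalg.norm(centre - v.points.mean(axis=0)[:2]) < DISTANCE_THRESHOLD
            for v in vehicles
        )
        if not near_vehicle:
            kept.append(s)
    return kept

def confirm_target_obstacles(remaining: list, now: float) -> list:
    # Claim 7: only obstacles that persist beyond the preset duration become target obstacles.
    return [c for c in remaining if now - c.first_seen > DURATION_THRESHOLD]

The axis-aligned footprint and the chosen comparison directions (sparse, mostly low-reflectivity clusters near vehicles being discarded) reflect the spray-and-exhaust scenario suggested by the claims; the specification does not fix these details, and a real implementation could choose them differently.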
CN202311442015.9A 2023-11-01 2023-11-01 Obstacle detection method and device, electronic equipment and storage medium Pending CN117423091A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311442015.9A CN117423091A (en) 2023-11-01 2023-11-01 Obstacle detection method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311442015.9A CN117423091A (en) 2023-11-01 2023-11-01 Obstacle detection method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117423091A true CN117423091A (en) 2024-01-19

Family

ID=89524554

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311442015.9A Pending CN117423091A (en) 2023-11-01 2023-11-01 Obstacle detection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117423091A (en)

Similar Documents

Publication Publication Date Title
US20210350149A1 (en) Lane detection method and apparatus,lane detection device,and movable platform
CN112541475B (en) Sensing data detection method and device
CN112509126B (en) Method, device, equipment and storage medium for detecting three-dimensional object
CN114677655A (en) Multi-sensor target detection method and device, electronic equipment and storage medium
CN116503803A (en) Obstacle detection method, obstacle detection device, electronic device and storage medium
CN115861959A (en) Lane line identification method and device, electronic equipment and storage medium
CN116469073A (en) Target identification method, device, electronic equipment, medium and automatic driving vehicle
CN115139303A (en) Grid well lid detection method, device, equipment and storage medium
CN114332487A (en) Image-based accumulated water early warning method, device, equipment, storage medium and product
CN117830642A (en) Target speed prediction method and device based on millimeter wave radar and storage medium
CN117036457A (en) Roof area measuring method, device, equipment and storage medium
CN115267782A (en) Dangerous area early warning method, device, equipment and medium based on microwave radar
CN115546764A (en) Obstacle detection method, device, equipment and storage medium
CN114581890B (en) Method and device for determining lane line, electronic equipment and storage medium
CN117423091A (en) Obstacle detection method and device, electronic equipment and storage medium
CN116091450A (en) Obstacle detection method, obstacle detection device, obstacle detection equipment, obstacle detection medium and obstacle detection product
CN115526837A (en) Abnormal driving detection method and device, electronic equipment and medium
CN115436900A (en) Target detection method, device, equipment and medium based on radar map
CN115376106A (en) Vehicle type identification method, device, equipment and medium based on radar map
CN114445802A (en) Point cloud processing method and device and vehicle
CN115424441B (en) Road curve optimization method, device, equipment and medium based on microwave radar
CN115140040B (en) Method and device for determining following target, electronic equipment and storage medium
CN117392000B (en) Noise removing method and device, electronic equipment and storage medium
CN117148837B (en) Dynamic obstacle determination method, device, equipment and medium
CN115440057B (en) Method, device, equipment and medium for detecting curve vehicle based on radar map

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination