CN117496485A - Object detection method and device, electronic equipment and computer readable storage medium - Google Patents

Info

Publication number
CN117496485A
Authority
CN
China
Prior art keywords
point cloud
point
position information
information
target
Prior art date
Legal status
Pending
Application number
CN202311615928.6A
Other languages
Chinese (zh)
Inventor
蔡禹丞
刘浩
桂晨光
Current Assignee
Beijing Jingdong Qianshi Technology Co Ltd
Original Assignee
Beijing Jingdong Qianshi Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Jingdong Qianshi Technology Co Ltd filed Critical Beijing Jingdong Qianshi Technology Co Ltd
Priority to CN202311615928.6A
Publication of CN117496485A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks

Abstract

The disclosure provides an object detection method and device, electronic equipment and a computer readable storage medium, which can be applied to the fields of computer technology and automatic driving technology. The object detection method comprises the following steps: in response to receiving point cloud information from a laser radar, processing the point cloud information to obtain point cloud distribution information, wherein the point cloud distribution information comprises M pieces of position information and first point cloud sets corresponding to the M pieces of position information, and M is a positive integer; in response to the first point cloud set containing points that satisfy a first preset condition with respect to the vehicle height, clustering the first point cloud set to obtain N clustered point cloud sets, wherein N is a positive integer; determining the object heights corresponding to the N clustered point cloud sets according to the N clustered point cloud sets; and detecting the object heights corresponding to the N clustered point cloud sets respectively according to the vehicle height, to obtain object detection results.

Description

Object detection method and device, electronic equipment and computer readable storage medium
Technical Field
The present disclosure relates to the field of computer technology and autopilot technology, and more particularly, to an object detection method and apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
Background
With the development of computer technology, autopilot technology has also developed. In an autopilot scenario, the perception of overhead obstacles by the unmanned vehicle is limited by the mounting layout of the lidar on the unmanned vehicle and by the characteristics of the lidar itself. These problems are generally mitigated either by improving the radar configuration information or by adjusting the vehicle configuration information.
In the process of implementing the disclosed concept, the inventors found that the related art has at least the following problems: improving the radar configuration information requires increasing the number of scan lines of the laser radar, which makes identifying overhead obstacles costly, while adjusting the vehicle configuration information reduces the trafficability of the unmanned vehicle, so the accuracy of identifying overhead obstacles cannot be effectively guaranteed.
Disclosure of Invention
In view of this, the present disclosure provides an object detection method and apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
According to one aspect of the present disclosure, there is provided an object detection method including: in response to receiving point cloud information from a laser radar, processing the point cloud information to obtain point cloud distribution information, wherein the point cloud distribution information comprises M pieces of position information and first point cloud sets corresponding to the M pieces of position information, and M is a positive integer; in response to the first point cloud set containing points that satisfy a first preset condition with respect to the vehicle height, clustering the first point cloud set to obtain N clustered point cloud sets, wherein N is a positive integer; determining object heights corresponding to the N clustered point cloud sets according to the N clustered point cloud sets; and detecting the object heights corresponding to the N clustered point cloud sets respectively according to the vehicle height, to obtain object detection results.
According to another aspect of the present disclosure, there is provided an object detection apparatus including: a processing module configured to, in response to receiving point cloud information from a laser radar, process the point cloud information to obtain point cloud distribution information, wherein the point cloud distribution information comprises M pieces of position information and first point cloud sets corresponding to the M pieces of position information, and M is a positive integer; a clustering module configured to, in response to the first point cloud set containing points that satisfy a first preset condition with respect to the vehicle height, cluster the first point cloud set to obtain N clustered point cloud sets, wherein N is a positive integer; a first determining module configured to determine object heights corresponding to the N clustered point cloud sets according to the N clustered point cloud sets; and a detection module configured to detect the object heights corresponding to the N clustered point cloud sets respectively according to the vehicle height, to obtain object detection results.
According to another aspect of the present disclosure, there is provided an electronic device including: one or more processors; and a memory for storing one or more instructions that, when executed by the one or more processors, cause the one or more processors to implement a method as described in the present disclosure.
According to another aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon executable instructions that, when executed by a processor, cause the processor to implement a method as described in the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising computer executable instructions which, when executed, are adapted to carry out the method as described in the present disclosure.
According to another aspect of the present disclosure, an autonomous vehicle is provided, which may include an electronic device according to an embodiment of the present disclosure.
According to the embodiment of the disclosure, the point cloud information of the laser radar is processed, and the obtained first point cloud set corresponding to the grid position is clustered according to the vehicle height to obtain the clustered point cloud set, so that point clouds with similar characteristics in the first point cloud set can be effectively identified. The object height is obtained by processing the cluster point cloud set, so that the object height can be used for representing the obstacle height of the cluster point cloud set, and further the accuracy of the object detection result obtained according to the object height is improved. On the basis, the object detection result is automatically obtained by detecting the object height according to the vehicle height, so that the object detection efficiency is improved, and the real-time performance of obstacle identification and the safety in the automatic driving process are further improved.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments thereof with reference to the accompanying drawings in which:
FIG. 1 schematically illustrates a system architecture to which an object detection method may be applied, according to an embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow chart of an object detection method according to an embodiment of the disclosure;
FIG. 3 schematically illustrates an example schematic diagram of a process for processing point cloud information to obtain point cloud distribution information in response to receiving the point cloud information from a lidar according to an embodiment of the present disclosure;
FIG. 4 schematically illustrates an example schematic diagram of a process of clustering a first set of point clouds to obtain N clustered point clouds in response to the presence of points in the first set of point clouds meeting a first predetermined condition with vehicle height, in accordance with an embodiment of the disclosure;
FIG. 5A schematically illustrates an example schematic diagram of a process of determining an object height in the related art;
FIG. 5B schematically illustrates an example schematic diagram of a process of determining object heights corresponding to each of N clustered point cloud sets from the N clustered point cloud sets according to an embodiment of the disclosure;
FIG. 6 schematically illustrates an example schematic diagram of an object detection process according to an embodiment of the disclosure;
FIG. 7 schematically illustrates a block diagram of an object detection apparatus according to an embodiment of the disclosure; and
Fig. 8 schematically illustrates a block diagram of an electronic device adapted to implement an object detection method according to an embodiment of the disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is only exemplary and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and/or the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner.
Where expressions like "at least one of A, B and C" are used, the expression should generally be interpreted in accordance with the meaning commonly understood by those skilled in the art (e.g., "a system having at least one of A, B and C" shall include, but not be limited to, a system having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.).
In embodiments of the present disclosure, the collection, updating, analysis, processing, use, transmission, provision, disclosure, storage, etc. of the data involved (including, but not limited to, user personal information) all comply with the relevant laws and regulations, are used for legitimate purposes, and do not violate public order and good morals. In particular, necessary measures are taken for users' personal information to prevent illegal access to users' personal information data and to maintain users' personal information security, network security and national security.
In embodiments of the present disclosure, the user's authorization or consent is obtained before the user's personal information is obtained or collected.
For example, after the point cloud information of the lidar is collected, the relevant information may be desensitized, for example by de-identification or anonymization, to protect information security.
In an autopilot scenario, the perception of overhead obstacles by the unmanned vehicle is limited by the mounting layout of the lidar on the unmanned vehicle and by the characteristics of the lidar itself. Specifically, the laser radar generally works in a line-scanning mode, and because the number of its laser beams is limited, the included angle between adjacent laser beams is relatively large. When an overhead obstacle of a certain height appears at a distance, the lowest detection beam emitted by the laser radar may hit the obstacle at only a single point, and the height of that point may exceed the height of the unmanned vehicle, so the unmanned vehicle judges that there is no danger ahead and that it can pass. In reality, however, the height of the obstacle is lower than the vehicle height. As the vehicle continues to advance, the spacing between the laser radar beams becomes smaller and smaller at close range, and the unmanned vehicle eventually detects that the obstacle is lower than the vehicle height; but by then the obstacle is already very close to the unmanned vehicle, so sudden braking easily causes a rear-end collision by a following vehicle, and if the braking distance is insufficient, a collision with the obstacle occurs.
In the related art, the above-described problems are generally avoided either by adjusting the vehicle configuration information or by improving the radar configuration information. Adjusting the vehicle configuration information may refer to raising the minimum clearance height required for the unmanned vehicle to pass, so as to absorb the height uncertainty introduced by the laser radar. Improving the radar configuration information may refer to increasing the number of lines of the lidar and increasing the density of laser beams in the vertical direction, so as to reduce the perception error for obstacles at remote locations.
However, in the approach based on adjusting the vehicle configuration information, the trafficability of the unmanned vehicle is reduced: if the obstacle height is lower than the decision height but higher than the unmanned vehicle height, it is determined that the unmanned vehicle cannot pass, so the vehicle is easily blocked when it encounters a road section that cannot be bypassed. In the approach based on improving the radar configuration information, increasing the number of scan lines of the laser radar increases the cost of the unmanned vehicle's sensors.
In order to at least partially solve the technical problems in the related art, the present disclosure provides an object detection method and apparatus, an electronic device, and a computer-readable storage medium, which can be applied to the fields of computer technology and autopilot technology.
Fig. 1 schematically illustrates a system architecture to which an object detection method may be applied according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which embodiments of the present disclosure may be applied to assist those skilled in the art in understanding the technical content of the present disclosure, but does not mean that embodiments of the present disclosure may not be used in other devices, systems, environments, or scenarios.
As shown in fig. 1, a system architecture 100 according to this embodiment may include a first terminal device 101, a second terminal device 102, a third terminal device 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between different devices. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
It should be noted that, the object detection method provided by the embodiments of the present disclosure may be generally performed by the server 105. Accordingly, the object detection apparatus provided by the embodiments of the present disclosure may be generally provided in the server 105.
Alternatively, the object detection method provided by the embodiment of the present disclosure may also be performed by the first terminal device 101, the second terminal device 102, or the third terminal device 103. Accordingly, the object detection apparatus provided by the embodiments of the present disclosure may also be provided in the first terminal device 101, the second terminal device 102, or the third terminal device 103.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
It should be noted that the sequence numbers of the respective operations in the following methods are merely representative of the operations for the purpose of description, and should not be construed as representing the order of execution of the respective operations. The method need not be performed in the exact order shown unless explicitly stated.
Fig. 2 schematically illustrates a flow chart of an object detection method according to an embodiment of the present disclosure.
As shown in fig. 2, the object detection method 200 includes operations S210 to S240.
In operation S210, in response to receiving the point cloud information from the lidar, the point cloud information is processed to obtain point cloud distribution information, where the point cloud distribution information includes M pieces of location information and first point cloud sets corresponding to the M pieces of location information, and M is a positive integer.
In operation S220, in response to the first point cloud set having a point satisfying the first predetermined condition with the vehicle height, the first point cloud set is clustered to obtain N clustered point cloud sets, where N is a positive integer.
In operation S230, object heights corresponding to the N cluster point cloud sets are determined according to the N cluster point cloud sets.
In operation S240, object heights corresponding to the N cluster point clouds are detected, respectively, according to the vehicle heights, to obtain object detection results.
According to an embodiment of the present disclosure, in the field of autopilot, a lidar may be deployed above a vehicle in order to detect obstacles around the vehicle. Lidar (Laser Radar) refers to a radar system that detects characteristic quantities such as the position and speed of a target obstacle using an emitted laser beam. The laser radar returns point cloud information during detection. The point cloud information may refer to a data set of spatial points scanned by the laser radar apparatus, and each point may include three-dimensional coordinates and a laser reflection intensity. The three-dimensional coordinates may be used to characterize the position of the point in space. The laser reflection intensity may be related to the surface texture, roughness and laser incidence angle of the target obstacle, and to the laser wavelength and energy density of the lidar.
According to the embodiment of the disclosure, after the point cloud information from the laser radar is received, the point cloud information can be segmented by using a point cloud segmentation method to obtain point cloud distribution information. The point cloud segmentation method can be configured according to actual service requirements, and is not limited herein. For example, the point cloud segmentation method may include at least one of: traditional point cloud segmentation methods and point cloud segmentation methods based on deep learning. The conventional point cloud segmentation method may include at least one of: an edge information-based segmentation method, a model fitting-based segmentation method, a region growing-based segmentation method, an attribute-based segmentation method, and a graph optimization-based segmentation method. The deep learning based method may include at least one of: projection-based segmentation methods, voxel-based segmentation methods, and point-based segmentation methods.
For example, the point cloud information may be processed based on a planar grid method, resulting in M pieces of position information and a first point cloud set corresponding to each of the M pieces of position information, in which case the position information may be used to characterize a position corresponding to the grid. Alternatively, three-dimensional point cloud information may be projected onto a two-dimensional plane to obtain M pieces of position information, and the M pieces of position information are processed based on a convolutional neural network (Convolutional Neural Networks, CNN) to obtain first point cloud sets corresponding to the M pieces of position information, where the position information may be used to characterize positions corresponding to the projected two-dimensional plane points.
According to an embodiment of the present disclosure, after obtaining point cloud distribution information, for each of M pieces of location information, a first point cloud set corresponding to the location information may be determined. An average height corresponding to the first set of point clouds is determined from at least one point cloud in the first set of point clouds. On this basis, it may be determined whether there is a point in the first set of point clouds satisfying a first predetermined condition with the vehicle height, based on the average height and the vehicle height. For example, in a case where it is determined that there is no point in the first set of point clouds that satisfies the first predetermined condition with the vehicle height, processing of the next frame of point cloud information may be continued. Alternatively, in the case that it is determined that there are points in the first point cloud set that satisfy the first predetermined condition with the vehicle height, the first point cloud set may be clustered to obtain N clustered point cloud sets. The first predetermined condition may be configured according to an actual service requirement, which is not limited herein. For example, the first predetermined condition may be set such that the average height is greater than the vehicle height.
According to the embodiment of the disclosure, the first point cloud sets can be clustered by using a point cloud clustering method, so that the first point cloud sets are grouped to obtain N clustered point cloud sets. The point cloud clustering method can be configured according to actual service requirements, and is not limited herein. For example, the point cloud clustering method may include at least one of: a distance-based point cloud clustering method, a density-based point cloud clustering method, a model-based point cloud clustering method and a graph theory-based point cloud clustering method. Specifically, the distance-based point cloud clustering method refers to a method of determining whether points belong to the same cluster based on the distance between the points. The density-based point cloud clustering method refers to a method of determining clusters by calculating densities of points. The model-based point cloud clustering method is a method for fitting point cloud data into a mathematical model and clustering according to parameters of the mathematical model. The point cloud clustering method based on graph theory refers to a method for representing point cloud data in graph form and clustering by utilizing connectivity of the graph.
According to an embodiment of the present disclosure, after obtaining N clustered point cloud sets, for each of the N clustered point cloud sets, for S candidate points in the clustered point cloud set, a first target point and a laser beam identifier corresponding to the first target point may be determined according to position information corresponding to each of the S candidate points. On the basis of this, the object height corresponding to the cluster point cloud set can be determined from the first target point and the laser beam identification. The object height may be used to characterize the height of the obstacle to be detected.
According to the embodiment of the disclosure, after the object heights corresponding to the N cluster point cloud sets are obtained, for each cluster point cloud set in the N cluster point cloud sets, the object heights corresponding to the cluster point cloud sets may be detected according to the vehicle height, so as to obtain an object detection result. The object detection results may be used to characterize whether the object is at risk. For example, in the case where the object height corresponding to the cluster point cloud set is smaller than the vehicle height, an object detection result that characterizes the object as risky may be determined. Alternatively, in the case where the object height corresponding to the cluster point cloud set is greater than or equal to the vehicle height, an object detection result that characterizes the object as not risky may be determined.
According to embodiments of the present disclosure, after obtaining an object detection result, a planning and control (Planning And Control, PNC) module may be utilized to flag the object as a suspected collision obstacle for the object detection result that characterizes the object as risky. In the event that a marker of a suspected collision obstacle is encountered while the vehicle is in motion, a strategy may be selected to preferentially bypass or slow the passage. On the basis, under the condition that the vehicle approaches an obstacle, if the height of the obstacle is larger than the height of the vehicle, the vehicle can normally pass; if the height of the obstacle is lower than the vehicle height, the vehicle can be braked under the low-speed condition, so that potential hazards of collision and rear-end collision accidents can be eliminated.
According to the embodiment of the disclosure, the point cloud information of the laser radar is processed, and the obtained first point cloud set corresponding to the grid position is clustered according to the vehicle height to obtain the clustered point cloud set, so that point clouds with similar characteristics in the first point cloud set can be effectively identified. The object height is obtained by processing the cluster point cloud set, so that the object height can be used for representing the obstacle height of the cluster point cloud set, and further the accuracy of the object detection result obtained according to the object height is improved. On the basis, the object detection result is automatically obtained by detecting the object height according to the vehicle height, so that the object detection efficiency is improved, and the real-time performance of obstacle identification and the safety in the automatic driving process are further improved.
An object detection method 200 according to an embodiment of the present disclosure is further described below with reference to fig. 3 to 6.
According to an embodiment of the present disclosure, the object detection method 200 may further include the following operations.
A vehicle height corresponding to the vehicle and radar configuration information corresponding to the lidar are determined. And determining a target included angle according to the radar configuration information, wherein the target included angle is used for representing the angle between every two adjacent laser beams.
According to embodiments of the present disclosure, in deploying the lidar, a vehicle height corresponding to the vehicle and radar configuration information corresponding to the lidar may be determined. The radar configuration information may include at least one of: ranging radius, sampling frequency, ranging resolution, and angular resolution.
In particular, the ranging radius may be used to characterize the ranging range of the lidar. The sampling frequency may be used to characterize the number of times the lidar completes laser emission and reception within 1 s; the higher the sampling frequency, the more scans can be performed and thus the higher the quality of the point cloud information. The lower the ranging resolution, the higher the quality of the point cloud information. The angular resolution may be used to characterize the angle between two adjacent measurement points; the smaller the angular resolution, the smaller the objects that can be scanned and thus the higher the quality of the point cloud information.
According to an embodiment of the present disclosure, after radar configuration information is obtained, the resolution of the lidar in the vertical direction may be determined, for example, from the ranging resolution and the angular resolution. On the basis of this, the target angle between every two adjacent laser beams can be determined according to the resolution in the vertical direction. Alternatively, the quality of the point cloud information can be improved by increasing the line number of the multi-line lidar and encrypting the scanning area of the point cloud in the vertical direction.
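As an illustration of how the target included angle might be derived from the radar configuration information, the following minimal Python sketch assumes the simplest case of a lidar whose beams are spread evenly over its vertical field of view; the function name, the evenly-spaced-beam assumption and the example values are illustrative only and are not taken from the embodiment.

    import math

    def target_included_angle(vertical_fov_deg: float, num_beams: int) -> float:
        # Angle between two adjacent laser beams, assuming the beams are spaced
        # evenly over the vertical field of view (real lidars may use non-uniform
        # spacing, in which case a per-pair lookup would be needed instead).
        return vertical_fov_deg / (num_beams - 1)

    # Example with assumed values: a 16-line lidar with a 30 degree vertical field of view
    angle_deg = target_included_angle(30.0, 16)   # 2.0 degrees between adjacent beams
    angle_rad = math.radians(angle_deg)           # radian form, if needed by later formulas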
According to the embodiment of the disclosure, the vehicle height is predetermined, so that the object height can be detected by utilizing the vehicle height, and the accuracy of object detection is improved. The radar configuration information is predetermined, and the target included angle is determined according to the radar configuration information, so that the target included angle can be used for representing the angle between every two adjacent laser beams, the follow-up object detection according to the target included angle is facilitated, and the reliability of an object detection result is improved.
According to an embodiment of the present disclosure, the point cloud information includes laser beam identifications and candidate position information corresponding to each of the first number of candidate points.
According to an embodiment of the present disclosure, operation S210 may include the following operations.
Generating a grid map according to a predetermined size, wherein the grid map comprises M grids and position information corresponding to each of the M grids. And projecting the first number of candidate points to the grid map according to the candidate position information corresponding to each of the first number of candidate points and the position information corresponding to each of the M grids, to obtain a candidate point cloud set corresponding to each of the M pieces of position information. For each of the M pieces of location information, a point cloud set height corresponding to the candidate point cloud set is determined from the location information. And screening the candidate point cloud sets according to a predetermined threshold value and the point cloud set height, to obtain a first point cloud set corresponding to the position information.
According to the embodiment of the disclosure, after the point cloud information is obtained, each point in the data can be associated with the laser beam of the multi-line laser radar that produced it; this correspondence is provided by the laser radar, so the laser beam identification included in the point cloud information can be determined.
According to the embodiment of the present disclosure, the predetermined size may be configured according to actual service requirements, which is not limited herein. The grid map may include one of the following: a planar grid, a multi-layer grid, and three-dimensional voxels. For example, M grids may be generated according to a predetermined size, and then the original first number of candidate points in the point cloud information may be projected into the corresponding grids according to the candidate position information corresponding to each of the first number of candidate points and the position information corresponding to each of the M grids.
According to the embodiment of the disclosure, after obtaining the candidate point cloud sets corresponding to each of the M position information, feature extraction may be performed on each of the candidate point cloud sets for each of the M candidate point cloud sets. The extracted point cloud features may be configured according to actual service requirements, which is not limited herein. For example, the point cloud features may include at least one of: average height, maximum height, height difference, and density. On this basis, the extracted point cloud features corresponding to each point cloud may be classified and the ground points marked according to a predetermined threshold.
According to an embodiment of the present disclosure, by projecting a first number of candidate points into a grid map generated according to a predetermined size, a candidate point cloud set corresponding to each of M pieces of position information can be obtained from the position information corresponding to each of M pieces of grids. On the basis, the height of the point cloud set corresponding to the candidate point cloud set can be determined according to the position information, the candidate point cloud set is screened by setting a preset threshold value, and the first point cloud set corresponding to the position information can be obtained, namely, the first point cloud set corresponding to the position information can be obtained effectively by generating a grid chart and projecting and screening the point cloud, so that the accuracy of subsequent object detection is improved.
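A minimal Python sketch of this grid projection and screening step is given below; the cell size, the choice of the maximum point height as the point cloud set height, and the function and parameter names are assumptions made for illustration rather than the embodiment's reference implementation.

    import numpy as np

    def build_first_point_cloud_sets(points: np.ndarray, cell_size: float,
                                     height_threshold: float) -> dict:
        # points is an (n, 3) array of (x, y, z) coordinates from the lidar.
        # Map each point to a planar grid cell according to its x/y coordinates.
        cell_indices = np.floor(points[:, :2] / cell_size).astype(np.int64)
        candidate_sets = {}
        for idx, point in zip(map(tuple, cell_indices), points):
            candidate_sets.setdefault(idx, []).append(point)
        # Screen each candidate point cloud set: here the maximum point height is
        # used as the point cloud set height and compared with the predetermined
        # threshold, which is one possible way to drop cells containing only ground points.
        first_sets = {}
        for idx, cell_points in candidate_sets.items():
            cell_points = np.asarray(cell_points)
            if cell_points[:, 2].max() > height_threshold:
                first_sets[idx] = cell_points
        return first_sets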
Fig. 3 schematically illustrates an example schematic diagram of a process of processing point cloud information to obtain point cloud distribution information in response to receiving the point cloud information from a lidar according to an embodiment of the disclosure.
As shown in fig. 3, in 300, a grid map 302 may be generated according to a predetermined size 301, and the grid map 302 may include M grids 302_1 and position information 302_2 corresponding to each of the M grids 302_1.
The first number of candidate points are projected onto the grid map according to the candidate position information 303 corresponding to each of the first number of candidate points and the position information 302_2 corresponding to each of the M grids 302_1, to obtain a candidate point cloud set 304 corresponding to each of the M pieces of position information.
For each of the M position information 302_2, a point cloud set height 305 of the candidate point cloud set 304 corresponding to each of the M position information may be determined from the position information 302_2. On this basis, candidate point cloud sets 304 corresponding to each of the M position information may be filtered according to a predetermined threshold 306 and a point cloud set height 305, to obtain a first point cloud set 307 corresponding to the position information 302_2.
According to an embodiment of the present disclosure, the object detection method 200 may further include the following operations.
For each of the M pieces of location information, an average height corresponding to the first point cloud set is determined from the first point cloud set corresponding to the location information. In response to the average height being greater than the vehicle height, it is determined that there are points in the first set of point clouds that satisfy a first predetermined condition with the vehicle height.
According to an embodiment of the present disclosure, after obtaining the first point cloud sets corresponding to each of the M pieces of position information, an average height of the first point cloud sets corresponding to the position information may be determined for each of the M pieces of position information. The first predetermined condition may be configured according to an actual service requirement, which is not limited herein. For example, the first predetermined condition may be set as to whether the height of the point is higher than the ground plus the vehicle height. In the case where the average height is greater than the vehicle height, it may be determined that there is a point in the first set of point clouds that satisfies a first predetermined condition with the vehicle height.
According to the embodiment of the disclosure, the point cloud data are divided according to the grid positions, so that a plurality of different first point cloud sets can be obtained. For each first point cloud set, the average height is obtained from the vertical coordinates of all points in the set, and the average height is compared with the vehicle height, so that whether the first point cloud set contains points satisfying the first predetermined condition can be judged. This enables effective detection and identification of the target and helps improve the safety of automatic driving.
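The first predetermined condition described above can be sketched as follows; `first_sets` is assumed to have the layout produced by the grid-projection sketch earlier, and using the mean z-coordinate as the average height is the interpretation suggested by this paragraph rather than a definitive implementation.

    def cells_exceeding_vehicle_height(first_sets: dict, vehicle_height: float) -> dict:
        # Keep only the cells whose first point cloud set has an average height
        # greater than the vehicle height; these are the cells passed to clustering.
        return {idx: pts for idx, pts in first_sets.items()
                if pts[:, 2].mean() > vehicle_height}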
According to an embodiment of the disclosure, the first set of point clouds comprises P first candidate points, P being a positive integer.
According to an embodiment of the present disclosure, operation S220 may include the following operations.
And screening the P first candidate points according to the vehicle height and the candidate position information corresponding to the P first candidate points to obtain Q second candidate points, wherein Q is a positive integer. A cluster scan radius and a predetermined point threshold are determined using a density-based clustering algorithm. And, for each target second candidate point in the Q second candidate points, determining, among the Q second candidate points, a second number of candidate points to be clustered corresponding to the target second candidate point according to the cluster scan radius. And, in response to the number of candidate points to be clustered and the predetermined point threshold satisfying a second predetermined condition, clustering the second number of candidate points to be clustered to obtain a clustered point cloud set.
According to an embodiment of the present disclosure, the density-based clustering algorithm may include one of: the DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm, the CFSFDP (Clustering by Fast Search and Find of Density Peaks) algorithm, and so on.
For example, in the case where the density-based clustering algorithm is the DBSCAN algorithm, it is necessary to determine the cluster scan radius (i.e., Eps) and the predetermined point threshold (i.e., MinPts). The cluster scan radius and the predetermined point threshold can be set adaptively, or can be set according to actual service requirements, which is not limited herein. For example, a distance matrix of all point clouds in the clustered point cloud set may be determined, the upper triangular part of the distance matrix may be obtained, and the cluster scan radius may be determined according to the magnitude of the element values included in the distance matrix. Using the determined cluster scan radius, the first point cloud set can be pre-clustered to obtain the number of point clouds included in each of at least one pre-clustered point cloud set, and the predetermined point threshold can then be determined according to these numbers. For example, the average of the numbers of point clouds included in the pre-clustered point cloud sets may be used as the predetermined point threshold.
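One way the adaptive parameter choice sketched in the preceding paragraph might look in code is shown below; the 10% distance quantile used for the scan radius and the rounding of the average pre-cluster size are illustrative assumptions, since the embodiment only states that the radius follows from the element values of the distance matrix and the threshold from the average pre-cluster size.

    import numpy as np
    from scipy.spatial.distance import pdist
    from sklearn.cluster import DBSCAN

    def adaptive_eps_and_min_pts(points: np.ndarray, quantile: float = 0.1):
        # pdist returns the condensed pairwise distances, i.e. the upper
        # triangular part of the distance matrix without the diagonal.
        pairwise = pdist(points)
        eps = float(np.quantile(pairwise, quantile))   # small quantile -> tight scan radius
        # Pre-cluster with the derived radius, then take the average size of the
        # pre-clustered point cloud sets as the predetermined point threshold.
        labels = DBSCAN(eps=eps, min_samples=1).fit_predict(points)
        sizes = np.bincount(labels[labels >= 0])
        min_pts = max(1, int(round(sizes.mean())))
        return eps, min_pts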
Alternatively, for the P first candidate points corresponding to the first point cloud set, a point p that has not been visited (i.e., unvisited) may be selected from the P first candidate points, and, according to the cluster scan radius, all nearby points whose distance from the point p is within the cluster scan radius, i.e., the second number of candidate points to be clustered, are determined. If the number of candidate points to be clustered is greater than or equal to the predetermined point threshold, the point p and the candidate points to be clustered can be grouped, i.e., the current point and its nearby points form a cluster to obtain a clustered point cloud set, and the point p is marked as visited. On this basis, all points within the cluster that are not yet marked as visited can be processed recursively in the same way, expanding the cluster. If the number of nearby points is less than the predetermined point threshold, the point p may be temporarily marked as a noise point. Once the cluster has been fully expanded, i.e., all points within the cluster are marked as visited, the remaining unvisited points can be processed in the same manner.
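A compact version of this density-based clustering step, using scikit-learn's DBSCAN as a stand-in for the hand-written expansion loop described above, might look as follows; treating Eps as the cluster scan radius and min_samples as the predetermined point threshold is an assumption consistent with the DBSCAN description in this section.

    import numpy as np
    from sklearn.cluster import DBSCAN

    def cluster_second_candidate_points(candidates: np.ndarray,
                                        eps: float, min_pts: int) -> list:
        # candidates holds the Q second candidate points; points labelled -1 by
        # DBSCAN correspond to the temporary noise points and are discarded.
        labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(candidates)
        return [candidates[labels == label]            # the N clustered point cloud sets
                for label in sorted(set(labels)) if label != -1]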
According to the embodiment of the disclosure, since the second candidate points are obtained by screening the first candidate points according to the height of the vehicle, on the basis, the target second candidate points are determined in the second candidate points, the candidate points to be clustered corresponding to the target second candidate points are determined in the second candidate points according to the cluster scanning radius determined by the density-based clustering algorithm, and the number of the candidate points to be clustered is checked according to the preset point threshold determined by the density-based clustering algorithm, so that clustering noise occurring in the clustering process can be eliminated, the quality of a cluster point cloud set is improved, and the accuracy of candidate object detection is further improved.
Fig. 4 schematically illustrates an example schematic diagram of a process of clustering a first point cloud set to obtain N clustered point cloud sets in response to the presence of a point in the first point cloud set meeting a first predetermined condition with a vehicle height, according to an embodiment of the disclosure.
As shown in fig. 4, in 400, for each location information 401 of the M location information 401, an average height 403 corresponding to the first set of point clouds 402 may be determined from the first set of point clouds 402 corresponding to the location information 401. After the average height 403 is obtained, operation S410 may be performed.
In operation S410, it is determined whether the average height is greater than the vehicle height. If not, processing may continue with the next location information 401. If so, it may be determined that there is a point 404 in the first point cloud set 402 that satisfies the first predetermined condition with the vehicle height 406. The P first candidate points may then be screened according to the vehicle height 406 and the candidate position information 405 corresponding to each of the P first candidate points, to obtain Q second candidate points 407.
Using a density-based clustering algorithm, a cluster scan radius 409 and a predetermined point threshold are determined. For each target second candidate point 407 of the Q second candidate points 407, a second number of candidate points 410 to be clustered corresponding to the target second candidate point may be determined among the Q second candidate points 407 according to the cluster scanning radius 409. After the second number of candidate points to be clustered 410 is obtained, operation S420 may be performed.
In operation S420, it is determined whether the number of candidate points to be clustered and the predetermined point number threshold satisfy the second predetermined condition. If not, the flow may end. If so, the second number of candidate points to be clustered 410 may be clustered to obtain a clustered point cloud set 411.
According to an embodiment of the present disclosure, the cluster point cloud set includes S candidate points, S being a positive integer.
According to an embodiment of the present disclosure, operation S230 may include the following operations.
And determining a first target point and a laser beam identifier corresponding to the first target point according to the position information corresponding to each of the S candidate points for each of the N cluster point cloud sets. And determining the position information of the second target point according to the first target point and the laser beam mark. And determining the object height corresponding to the cluster point cloud set according to the position information of the second target point.
According to embodiments of the present disclosure, the first target point may refer to the lowest point in the set of clustered point clouds. After the cluster point cloud set is obtained, the first target point with the lowest position can be determined in the S candidate points according to the position information corresponding to the S candidate points in the cluster point cloud set. On the basis of this, the laser beam identification corresponding to the first target point can be determined.
According to an embodiment of the present disclosure, after obtaining the laser beam mark, the position information of the second target point may be determined according to the first target point and the laser beam mark corresponding to the first target point. On the basis, the object height corresponding to the cluster point cloud set can be further determined according to the position information of the second target point. The object height is determined by the following formula (1).
H5 = H3 - H1    (1)
wherein H5 characterizes the object height corresponding to the cluster point cloud set, H3 characterizes the position information of the second target point, and H1 characterizes the vehicle height.
According to the embodiments of the present disclosure, by determining the first target point and the laser beam identification corresponding to the first target point according to the position information corresponding to each of the S candidate points, the position of the object in the point cloud data and the laser beam associated therewith can be determined more accurately. By analyzing the positional relationship between the laser beam and the adjacent point, the positional information of the second target point can be obtained, thereby realizing more accurate positioning. On this basis, since the object height is determined based on the position information of the second target point, the accuracy of object detection can be further improved.
According to an embodiment of the present disclosure, determining the position information of the second target point according to the first target point and the laser beam identification may include the following operations.
A target laser beam signature adjacent to the laser beam signature is determined. And projecting the first target point to a target laser beam corresponding to the target laser beam mark to obtain a second target point. And determining the position information of the second target point according to the target included angle, the position information of the first target point and the horizontal position information between the vehicle and the object.
According to embodiments of the present disclosure, after the laser beam identification is obtained, a target laser beam identification adjacent to the laser beam identification may be determined from the radar configuration information. The first target point on the laser beam corresponding to the laser beam identification may then be projected vertically downward, at the same horizontal position, onto the target laser beam corresponding to the target laser beam identification, to obtain the intersection point with the target laser beam, i.e., the second target point.
According to an embodiment of the present disclosure, the position information of the second target point may be determined according to a target angle between the laser beam and the target laser beam, the position information of the first target point, and the horizontal position information between the vehicle and the object. The position information of the second target point is determined as shown in the following formulas (2) and (3).
L1 = 2 × tan(A/2) × L2    (2)
L1 = H2 - H3    (3)
wherein L1 characterizes the distance between the first target point and the second target point, H2 characterizes the position information of the first target point, H3 characterizes the position information of the second target point, A characterizes the target included angle, and L2 characterizes the horizontal position information between the vehicle and the object.
According to the embodiments of the present disclosure, since the second target point is obtained by projecting the first target point to the target laser beam corresponding to the target laser beam mark, the target laser beam mark is adjacent to the laser beam mark, whereby automatic determination of the second target point can be achieved. On the basis, the position information of the second target point can be determined according to the predetermined target included angle, the position information of the first target point and the horizontal position information between the vehicle and the object, so that the position information of the second target point is beneficial to the subsequent determination of the height of the object, and the efficiency of object detection is improved.
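The geometric relationship of formulas (1) to (3) can be written directly as a short sketch; the function names and the example values are illustrative assumptions, and the target included angle is assumed to be given in degrees.

    import math

    def second_target_point_height(h2: float, l2: float, target_angle_deg: float) -> float:
        # Formula (2): L1 = 2 * tan(A / 2) * L2, the distance between the first
        # target point and its projection onto the adjacent laser beam.
        l1 = 2.0 * math.tan(math.radians(target_angle_deg) / 2.0) * l2
        # Formula (3): H3 = H2 - L1, the height of the second target point.
        return h2 - l1

    def object_height(h3: float, h1_vehicle: float) -> float:
        # Formula (1): H5 = H3 - H1.
        return h3 - h1_vehicle

    # Example with assumed values: lowest cluster point 2.4 m above ground,
    # obstacle 10 m ahead, 2 degrees between adjacent beams, vehicle height 2.0 m
    h3 = second_target_point_height(2.4, 10.0, 2.0)
    h5 = object_height(h3, 2.0)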
Fig. 5A schematically illustrates an example schematic diagram of a process of determining an object height in the related art according to an embodiment of the present disclosure.
As shown in fig. 5A, in 500A, a laser radar 502 is disposed on an unmanned vehicle 501, and a process of determining the height of an object is described by taking the laser radar 502 and an obstacle 503 as an example.
Among the plurality of laser beams emitted by the laser radar 502, the lowest laser beam 504, for example, can detect the point A on the obstacle 503. Since the height of the point A is higher than that of the unmanned vehicle 501, the unmanned vehicle 501 determines that there is no danger ahead and that it can pass.
In practice, however, the height of the obstacle 503 is lower than that of the unmanned vehicle 501. As the unmanned vehicle 501 continues to advance and the spacing between the laser beams emitted by the laser radar 502 becomes smaller at close range, the unmanned vehicle 501 detects that the obstacle 503 is lower than its own height. By this time, the obstacle 503 is already very close to the unmanned vehicle 501, so the vehicle brakes sharply, which easily causes a rear-end collision by a following vehicle; if the braking distance is insufficient, a collision with the obstacle occurs.
Fig. 5B schematically illustrates an example schematic diagram of a process of determining object heights corresponding to each of N clustered point cloud sets from the N clustered point cloud sets according to an embodiment of the disclosure.
As shown in fig. 5B, in 500B, a laser radar 507 is deployed on an unmanned vehicle 506, and a process of determining the object height corresponding to the cluster point cloud set is described by taking the laser radar 507 and an obstacle 508 as an example.
The lowest point in the cluster point cloud set and the laser beam identification corresponding to that lowest point, namely the first target point B and the laser beam 509, are determined according to the position information, included in the cluster point cloud set, corresponding to each of the S candidate points. The target laser beam identification adjacent to the laser beam identification, i.e., the laser beam 510 adjacent to the laser beam 509, may be determined from the laser beam 509. On this basis, the first target point B may be projected onto the laser beam 510 to obtain an intersection point, i.e., the second target point C.
The angle between the laser beam 509 and the laser beam 510, i.e., the angle BDC, is the target included angle θ. It can be seen that the triangle BCD is an isosceles triangle, so the position information of the second target point C can be calculated using formulas (2) and (3) above.
According to the embodiment of the disclosure, by exploiting the characteristics of a multi-line scanning laser radar, whether an overhead obstacle is lower than the vehicle height can be judged from the angular resolution between two laser lines, the distance between the obstacle and the vehicle, and the height of the obstacle above the ground. This at least partially overcomes the passage limitations of unmanned vehicles in overhead-obstacle detection scenarios in the related art, allows the unmanned vehicle to detect overhead obstacles and pass smoothly without increasing cost, avoids sudden braking caused by detecting an overhead obstacle only at close range while travelling at relatively high speed, and eliminates the potential hazards of collision and rear-end collision accidents.
Fig. 6 schematically illustrates an example schematic diagram of an object detection process according to an embodiment of the disclosure.
As shown in fig. 6, in 600, for each cluster point cloud set 601 of the N cluster point cloud sets 601, a first target point 603 and a laser beam identification 604 corresponding to the first target point 603 may be determined according to position information 602 corresponding to each of the S candidate points.
A target laser beam mark 605 adjacent to the laser beam mark 604 may be determined. The first target point 603 is projected onto a target laser beam corresponding to the target laser beam mark 605, resulting in a second target point 606. The position information 607 of the second target point is determined based on the target angle, the position information of the first target point, and the horizontal position information between the vehicle and the object. On the basis of this, the object height 608 corresponding to the cluster point cloud 601 is determined from the position information 607 of the second target point.
According to an embodiment of the present disclosure, operation S240 may include the following operations.
For each of the N cluster point cloud sets, determining an object detection result that characterizes the object as being at risk if the object height corresponding to the cluster point cloud set is less than the vehicle height. And determining an object detection result representing that the object is not at risk under the condition that the object height corresponding to the cluster point cloud set is greater than or equal to the vehicle height.
According to embodiments of the present disclosure, object detection results may be used to characterize whether an object is at risk. After the object heights corresponding to the cluster point cloud sets are obtained, a magnitude relationship between the object heights and the vehicle heights may be determined.
For example, in the case where the object height is smaller than the vehicle height, i.e., there is a risk of collision, an object detection result that characterizes the object as being at risk may be determined, and the obstacle may be marked as a suspected collision obstacle. Alternatively, in the case where the object height is greater than or equal to the vehicle height, i.e., there is no risk of collision, an object detection result that characterizes the object as not risky may be determined, and determination may be continued on other cluster point cloud sets.
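Under the assumption that object heights have already been estimated as above, the detection of operation S240 reduces to a threshold comparison, as sketched below; the result labels are illustrative only.

def detect_objects(object_heights, vehicle_height):
    """Sketch: mark each clustered object as a suspected collision obstacle
    when its height above ground is lower than the vehicle height."""
    results = []
    for height in object_heights:
        if height < vehicle_height:
            results.append("RISK")      # suspected collision obstacle
        else:
            results.append("NO_RISK")   # the vehicle can pass underneath
    return results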
According to the embodiment of the disclosure, determining an object detection result that characterizes the object as being at risk when the object height corresponding to the cluster point cloud set is smaller than the vehicle height helps an automatic driving system or intelligent transportation system identify and judge potentially dangerous objects more accurately, so that measures can be taken early to avoid accidents. Determining an object detection result that characterizes the object as not being at risk when the object height corresponding to the cluster point cloud set is greater than or equal to the vehicle height helps such a system recognize and filter out irrelevant targets more quickly, improving traffic efficiency while ensuring driving safety.
The above is only an exemplary embodiment, but is not limited thereto, and other object detection methods known in the art may be included as long as the efficiency and accuracy of object detection can be improved.
Fig. 7 schematically illustrates a block diagram of an object detection apparatus according to an embodiment of the present disclosure.
As shown in fig. 7, the object detection apparatus 700 may include a processing module 710, a clustering module 720, a first determining module 730, and a detection module 740.
The processing module 710 is configured to process the point cloud information in response to receiving the point cloud information from the lidar, and obtain point cloud distribution information, where the point cloud distribution information includes M pieces of location information and first point cloud sets corresponding to the M pieces of location information, and M is a positive integer.
The clustering module 720 is configured to cluster the first point cloud set in response to the presence of a point in the first point cloud set that meets a first predetermined condition with the vehicle height, to obtain N clustered point cloud sets, where N is a positive integer.
The first determining module 730 is configured to determine, according to the N cluster point cloud sets, heights of objects corresponding to the N cluster point cloud sets respectively.
The detection module 740 is configured to detect, according to the vehicle height, the heights of objects corresponding to the N cluster point cloud sets respectively, so as to obtain an object detection result.
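As a rough, hypothetical sketch of how the four modules of the object detection apparatus 700 might be composed, the class below chains them in sequence; the method names, constructor parameters, and the reuse of the helpers sketched earlier in this section are illustrative and not prescribed by the disclosure.

class ObjectDetectionApparatus:
    """Sketch of apparatus 700: processing, clustering, height determination
    and detection modelled as one chained pipeline."""

    def __init__(self, vehicle_height, target_angle, lidar_mount_height):
        self.vehicle_height = vehicle_height          # vehicle height (m)
        self.target_angle = target_angle              # angle between adjacent beams (rad)
        self.lidar_mount_height = lidar_mount_height  # lidar height above ground (m)

    def detect(self, point_cloud_info):
        cells = self.process(point_cloud_info)                      # processing module 710
        clusters = self.cluster(cells)                              # clustering module 720
        heights = estimate_object_heights(                          # first determining module 730
            clusters, self.target_angle, self.lidar_mount_height)
        return detect_objects(heights, self.vehicle_height)         # detection module 740

    def process(self, point_cloud_info):
        raise NotImplementedError("grid projection is sketched later in this section")

    def cluster(self, cells):
        raise NotImplementedError("density-based clustering is sketched later in this section")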
According to an embodiment of the present disclosure, the object detection apparatus 700 may further include a second determination module and a third determination module.
And the second determining module is used for determining the average height corresponding to the first point cloud set according to the first point cloud set corresponding to the position information for each piece of position information in the M pieces of position information.
And a third determining module for determining that there is a point in the first set of point clouds satisfying a first predetermined condition with the vehicle height in response to the average height being greater than the vehicle height.
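A minimal sketch of this precondition, assuming each first point cloud set is a list of points carrying a height above ground in a 'z' field; the field name is an assumption.

def needs_clustering(first_point_cloud_set, vehicle_height):
    """Sketch of the second and third determining modules: treat the cell as
    containing a point satisfying the first predetermined condition when the
    average height of its points exceeds the vehicle height."""
    if not first_point_cloud_set:
        return False
    average_height = sum(p["z"] for p in first_point_cloud_set) / len(first_point_cloud_set)
    return average_height > vehicle_height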
According to an embodiment of the disclosure, the first set of point clouds comprises P first candidate points, P being a positive integer.
According to an embodiment of the present disclosure, the clustering module 720 may include a first filtering unit, a first determining unit, a second determining unit, and a clustering unit.
And the first screening unit is used for screening the P first candidate points according to the height of the vehicle and the candidate position information corresponding to the P first candidate points to obtain Q second candidate points, wherein Q is a positive integer.
And the first determining unit is used for determining a cluster scanning radius and a preset point number threshold value by using a density-based clustering algorithm.
And a second determining unit, configured to determine, for each target second candidate point of the Q second candidate points, a second number of candidate points to be clustered corresponding to the target second candidate point among the Q second candidate points according to the cluster scanning radius.
And the clustering unit is used for responding to the number of candidate points to be clustered and the preset point number threshold value to meet a second preset condition, and clustering the second number of candidate points to be clustered to obtain a clustered point cloud set.
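Since the clustering module is described in terms of a density-based algorithm with a cluster scanning radius and a preset point-count threshold, a DBSCAN-style sketch conveys the idea; the choice of scikit-learn, the height screening rule, and the parameter values are assumptions made here, not details given in the disclosure.

import numpy as np
from sklearn.cluster import DBSCAN

def cluster_candidates(points, vehicle_height, eps=0.3, min_samples=5):
    """Sketch of the clustering module: screen candidate points by height
    against the vehicle height (the exact screening rule is not given in this
    excerpt; a generous margin is assumed), then group the survivors with
    DBSCAN, whose eps and min_samples correspond to the cluster scanning
    radius and the preset point-count threshold."""
    pts = np.asarray(points, dtype=float)                 # rows of (x, y, z)
    candidates = pts[pts[:, 2] <= vehicle_height * 1.5]   # assumed screening rule
    if len(candidates) == 0:
        return []
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(candidates)
    return [candidates[labels == k] for k in set(labels) if k != -1]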
According to an embodiment of the present disclosure, the object detection apparatus 700 may further include a fourth determination module and a fifth determination module.
And a fourth determining module for determining a vehicle height corresponding to the vehicle and radar configuration information corresponding to the lidar.
And a fifth determining module, configured to determine a target included angle according to the radar configuration information, where the target included angle is used to characterize an angle between every two adjacent laser beams.
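For a lidar whose beams are evenly spread over a known vertical field of view, the target included angle can be approximated from the radar configuration information as sketched below; the even-spacing assumption is made here for illustration, since many real lidars have non-uniform beam spacing.

import math

def target_included_angle(vertical_fov_deg, num_beams):
    """Approximate angle between every two adjacent laser beams, assuming the
    beams are evenly distributed over the vertical field of view (sketch)."""
    if num_beams < 2:
        raise ValueError("at least two beams are required")
    return math.radians(vertical_fov_deg / (num_beams - 1))

# e.g. a hypothetical 16-beam lidar with a 30 degree vertical field of view:
# target_included_angle(30.0, 16) is about 0.035 rad (2 degrees)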
According to an embodiment of the present disclosure, the point cloud information includes laser beam identifications and candidate position information corresponding to each of the first number of candidate points.
According to an embodiment of the present disclosure, the processing module 710 may include a generating unit, a projecting unit, a third determining unit, and a second screening unit.
And the generating unit is used for generating a grid map according to a preset size, wherein the grid map comprises M grids and position information corresponding to the M grids.
And the projection unit is used for projecting the first number of candidate points onto the grid map according to the candidate position information corresponding to each of the first number of candidate points and the position information corresponding to each of the M grids, so as to obtain candidate point cloud sets corresponding to each of the M pieces of position information.
And a third determining unit configured to determine, for each of the M pieces of position information, a point cloud set height corresponding to the candidate point cloud set according to the position information.
And the second screening unit is used for screening the candidate point cloud sets according to the preset threshold value and the point cloud set height to obtain a first point cloud set corresponding to the position information.
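The processing module's grid construction, projection and screening can be sketched as binning points into fixed-size cells, as below; the cell size, the use of the maximum z value as the point cloud set height, and the threshold semantics are assumptions where the excerpt leaves them open.

from collections import defaultdict

def project_to_grid(points, cell_size, height_threshold):
    """Sketch of the processing module: bin (x, y, z) points into a 2D grid of
    cell_size metres, then keep a cell's points as a first point cloud set only
    if the cell's height (here its maximum z, an assumption) exceeds the preset
    threshold."""
    cells = defaultdict(list)
    for x, y, z in points:
        key = (int(x // cell_size), int(y // cell_size))  # grid position information
        cells[key].append((x, y, z))

    first_point_cloud_sets = {}
    for key, cell_points in cells.items():
        cell_height = max(z for _, _, z in cell_points)
        if cell_height > height_threshold:
            first_point_cloud_sets[key] = cell_points
    return first_point_cloud_sets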
According to an embodiment of the present disclosure, the cluster point cloud set includes S candidate points, S being a positive integer.
According to an embodiment of the present disclosure, the first determining module 730 may include a fourth determining unit, a fifth determining unit, and a sixth determining unit.
And a fourth determining unit, configured to determine, for each of the N clustered point cloud sets, a first target point and a laser beam identifier corresponding to the first target point according to the position information corresponding to each of the S candidate points.
And a fifth determining unit for determining the position information of the second target point according to the first target point and the laser beam mark.
And a sixth determining unit, configured to determine an object height corresponding to the cluster point cloud set according to the position information of the second target point.
According to an embodiment of the present disclosure, the fifth determining unit may include a first determining subunit, a projection subunit, and a second determining subunit.
A first determining subunit for determining a target laser beam mark adjacent to the laser beam mark.
And the projection subunit is used for projecting the first target point to the target laser beam corresponding to the target laser beam mark to obtain a second target point.
And the second determining subunit is used for determining the position information of the second target point according to the target included angle, the position information of the first target point and the horizontal position information between the vehicle and the object.
According to an embodiment of the present disclosure, the detection module 740 may include a seventh determination unit and an eighth determination unit.
A seventh determining unit, configured to determine, for each of the N cluster point cloud sets, an object detection result that characterizes that the object is at risk, in a case where an object height corresponding to the cluster point cloud set is smaller than a vehicle height.
And an eighth determining unit for determining an object detection result representing that the object is not at risk in the case that the object height corresponding to the cluster point cloud set is greater than or equal to the vehicle height.
Any number of the modules, sub-modules, units, or sub-units according to embodiments of the present disclosure, or at least part of the functionality of any number of them, may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to embodiments of the present disclosure may be split into multiple modules for implementation. Any one or more of the modules, sub-modules, units, and sub-units according to embodiments of the present disclosure may be implemented at least in part as hardware circuitry, or in any one of, or any suitable combination of, software, hardware, and firmware. Alternatively, one or more of the modules, sub-modules, units, and sub-units according to embodiments of the present disclosure may be at least partially implemented as computer program modules which, when executed, may perform the corresponding functions.
It should be noted that, in the embodiments of the present disclosure, the object detection apparatus portion corresponds to the object detection method portion; for details of the object detection apparatus portion, reference may be made to the object detection method portion, and the description is not repeated here.
Fig. 8 schematically illustrates a block diagram of an electronic device adapted to implement an object detection method according to an embodiment of the disclosure. The electronic device shown in fig. 8 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 8, a computer electronic device 800 according to an embodiment of the present disclosure includes a processor 801 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 802 or a program loaded from a storage section 808 into a Random Access Memory (RAM) 803. The processor 801 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or an associated chipset and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), or the like. The processor 801 may also include on-board memory for caching purposes. The processor 801 may include a single processing unit or multiple processing units for performing the different actions of the method flows according to embodiments of the disclosure.
In the RAM 803, various programs and data required for the operation of the electronic device 800 are stored. The processor 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804.
According to an embodiment of the present disclosure, the electronic device 800 may also include an input/output (I/O) interface 805, the input/output (I/O) interface 805 also being connected to the bus 804. The electronic device 800 may also include one or more of the following components connected to an input/output (I/O) interface 805: an input portion 806 including a keyboard, mouse, etc.; an output portion 807 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and a speaker; a storage section 808 including a hard disk or the like; and a communication section 809 including a network interface card such as a LAN card, a modem, or the like. The communication section 809 performs communication processing via a network such as the internet. The drive 810 is also connected to an input/output (I/O) interface 805 as needed. A removable medium 811 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 810 as needed so that a computer program read out therefrom is mounted into the storage section 808 as needed.
The present disclosure also provides a computer-readable storage medium that may be embodied in the apparatus/device/system described in the above embodiments; or may exist alone without being assembled into the apparatus/device/system. The computer-readable storage medium carries one or more programs which, when executed, implement methods in accordance with embodiments of the present disclosure.
In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
Embodiments of the present disclosure also include a computer program product comprising a computer program that contains program code for performing the methods provided by the embodiments of the present disclosure; when the computer program product is run on an electronic device, the program code causes the electronic device to implement the object detection method provided by the embodiments of the present disclosure.
The above-described functions defined in the system/apparatus of the embodiments of the present disclosure are performed when the computer program is executed by the processor 801. The systems, apparatus, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the disclosure.
According to embodiments of the present disclosure, program code for executing a computer program provided by embodiments of the present disclosure may be written in any combination of one or more programming languages.
According to an embodiment of the present disclosure, an autonomous vehicle is provided, which may include an electronic device according to an embodiment of the present disclosure.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures.
The embodiments of the present disclosure are described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described above separately, this does not mean that the measures in the embodiments cannot be used advantageously in combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be made by those skilled in the art without departing from the scope of the disclosure, and such alternatives and modifications are intended to fall within the scope of the disclosure.

Claims (13)

1. An object detection method, comprising:
processing the point cloud information in response to receiving the point cloud information from the laser radar to obtain point cloud distribution information, wherein the point cloud distribution information comprises M pieces of position information and first point cloud sets corresponding to the M pieces of position information, and M is a positive integer;
clustering the first point cloud set to obtain N clustered point cloud sets in response to a point satisfying a first predetermined condition with the vehicle height existing in the first point cloud set, wherein N is a positive integer;
determining the heights of the objects corresponding to the N cluster point cloud sets respectively according to the N cluster point cloud sets; and
detecting, according to the vehicle height, the heights of the objects corresponding to the N cluster point cloud sets respectively, to obtain an object detection result.
2. The method of claim 1, further comprising, prior to clustering the first point cloud set to obtain N clustered point cloud sets in response to a point in the first point cloud set satisfying a first predetermined condition with vehicle height:
for each of the M pieces of location information,
determining an average height corresponding to the first point cloud set according to the first point cloud set corresponding to the position information; and
in response to the average height being greater than the vehicle height, determining that there is a point in the first set of point clouds that meets the first predetermined condition with vehicle height.
3. The method of claim 1, wherein the first set of point clouds comprises P first candidate points, P being a positive integer;
wherein the clustering the first point cloud set to obtain N clustered point cloud sets in response to a point satisfying the first predetermined condition with the vehicle height existing in the first point cloud set comprises:
screening the P first candidate points according to the vehicle height and candidate position information corresponding to the P first candidate points to obtain Q second candidate points, wherein Q is a positive integer;
determining a cluster scanning radius and a preset point threshold value by using a density-based clustering algorithm;
for each target second candidate point of the Q second candidate points,
determining a second number of candidate points to be clustered corresponding to the target second candidate point in the Q second candidate points according to the cluster scanning radius; and
and responding to the number of the candidate points to be clustered and the preset point threshold value to meet a second preset condition, and clustering the second number of candidate points to be clustered to obtain the clustering point cloud set.
4. The method according to any one of claims 1 to 3, further comprising, prior to the processing the point cloud information in response to receiving the point cloud information from the laser radar to obtain the point cloud distribution information:
determining the vehicle height corresponding to a vehicle and radar configuration information corresponding to the laser radar; and
determining a target included angle according to the radar configuration information, wherein the target included angle is used for representing the angle between every two adjacent laser beams.
5. The method of claim 4, wherein the point cloud information includes laser beam identifications and candidate location information corresponding to each of a first number of candidate points;
wherein the processing the point cloud information in response to receiving the point cloud information from the laser radar to obtain the point cloud distribution information comprises:
generating a grid map according to a preset size, wherein the grid map comprises M grids and position information corresponding to the M grids;
projecting the first number of candidate points to the grid graph according to the candidate position information corresponding to each of the first number of candidate points and the position information corresponding to each of the M grids to obtain candidate point cloud sets corresponding to each of the M position information;
for each of the M pieces of location information,
determining a point cloud set height corresponding to the candidate point cloud set according to the position information; and
screening the candidate point cloud set according to a preset threshold value and the point cloud set height to obtain the first point cloud set corresponding to the position information.
6. A method according to any one of claims 1 to 3, wherein the cluster point cloud set comprises S candidate points, S being a positive integer;
the determining, according to the N cluster point cloud sets, the object heights corresponding to the N cluster point cloud sets respectively includes:
for each of the N sets of clustered point clouds,
determining a first target point and a laser beam mark corresponding to the first target point according to the position information corresponding to each of the S candidate points;
determining the position information of a second target point according to the first target point and the laser beam mark; and
and determining the object height corresponding to the cluster point cloud set according to the position information of the second target point.
7. The method of claim 6, wherein the determining location information of a second target point from the first target point and the laser beam identification comprises:
determining a target laser beam mark adjacent to the laser beam mark;
projecting the first target point onto a target laser beam corresponding to the target laser beam mark to obtain the second target point; and
determining the position information of the second target point according to the target included angle, the position information of the first target point, and the horizontal position information between the vehicle and the object.
8. The method of claim 7, wherein the detecting, according to the vehicle height, the object heights corresponding to the N cluster point cloud sets respectively to obtain the object detection result comprises:
for each of the N sets of clustered point clouds,
determining an object detection result representing that the object is at risk under the condition that the object height corresponding to the cluster point cloud set is smaller than the vehicle height; and
determining an object detection result representing that the object is not at risk under the condition that the object height corresponding to the cluster point cloud set is greater than or equal to the vehicle height.
9. An object detection apparatus comprising:
the processing module is used for responding to the received point cloud information from the laser radar, processing the point cloud information and obtaining point cloud distribution information, wherein the point cloud distribution information comprises M pieces of position information and first point cloud sets corresponding to the M pieces of position information, and M is a positive integer;
the clustering module is used for clustering the first point cloud set to obtain N clustered point cloud sets in response to a point meeting the first preset condition with the vehicle height existing in the first point cloud set, wherein N is a positive integer;
the first determining module is used for determining the heights of the objects corresponding to the N clustering point cloud sets respectively according to the N clustering point cloud sets; and
the detection module is used for detecting, according to the vehicle height, the heights of the objects corresponding to the N cluster point cloud sets respectively, to obtain an object detection result.
10. An electronic device, comprising:
one or more processors;
a memory for storing one or more instructions,
wherein the one or more instructions, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1 to 8.
11. A computer readable storage medium having stored thereon executable instructions which when executed by a processor cause the processor to implement the method of any of claims 1 to 8.
12. A computer program product comprising computer executable instructions for implementing the method of any one of claims 1 to 8 when executed.
13. An autonomous vehicle comprising the electronic device of claim 10.
CN202311615928.6A 2023-11-29 2023-11-29 Object detection method and device, electronic equipment and computer readable storage medium Pending CN117496485A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311615928.6A CN117496485A (en) 2023-11-29 2023-11-29 Object detection method and device, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311615928.6A CN117496485A (en) 2023-11-29 2023-11-29 Object detection method and device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN117496485A (en) 2024-02-02

Family

ID=89684970

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311615928.6A Pending CN117496485A (en) 2023-11-29 2023-11-29 Object detection method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN117496485A (en)

Similar Documents

Publication Publication Date Title
CN109144097B (en) Obstacle or ground recognition and flight control method, device, equipment and medium
WO2018068653A1 (en) Point cloud data processing method and apparatus, and storage medium
Negru et al. Image based fog detection and visibility estimation for driving assistance systems
JP5822255B2 (en) Object identification device and program
CN111874006A (en) Route planning processing method and device
CN110674705A (en) Small-sized obstacle detection method and device based on multi-line laser radar
CN115273039B (en) Small obstacle detection method based on camera
CN114118252A (en) Vehicle detection method and detection device based on sensor multivariate information fusion
CN111638520A (en) Obstacle recognition method, obstacle recognition device, electronic device and storage medium
US20220171975A1 (en) Method for Determining a Semantic Free Space
CN113536867B (en) Object identification method, device and system
US8483478B1 (en) Grammar-based, cueing method of object recognition, and a system for performing same
CN113432615A (en) Detection method and system based on multi-sensor fusion drivable area and vehicle
US20220404503A1 (en) Three-dimensional object detection with ground removal intelligence
CN112639822A (en) Data processing method and device
CN117496485A (en) Object detection method and device, electronic equipment and computer readable storage medium
WO2021199584A1 (en) Detecting debris in a vehicle path
CN115359332A (en) Data fusion method and device based on vehicle-road cooperation, electronic equipment and system
CN115457505A (en) Small obstacle detection method, device and equipment for camera and storage medium
CN112286178B (en) Identification system, vehicle control system, identification method, and storage medium
CN114581615B (en) Data processing method, device, equipment and storage medium
WO2024042607A1 (en) External world recognition device and external world recognition method
CN117315306A (en) Object detection method, device and storage medium
CN117636098A (en) Model training, target detection and vehicle control methods, devices, equipment and media
CN117784168A (en) Construction area sensing method, device, equipment and storage medium in automatic driving

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination