CN117315306A - Object detection method, device and storage medium

Info

Publication number
CN117315306A
CN117315306A
Authority
CN
China
Prior art keywords
point
target
point cloud
target point
ground
Prior art date
Legal status
Pending
Application number
CN202311044445.5A
Other languages
Chinese (zh)
Inventor
华智
曾圣尧
胡来丰
Current Assignee
Zhejiang Zero Run Technology Co Ltd
Original Assignee
Zhejiang Zero Run Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Zero Run Technology Co Ltd
Priority to CN202311044445.5A
Publication of CN117315306A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/762: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G06V 10/20: Image preprocessing
    • G06V 10/26: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 20/588: Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Abstract

The application discloses a target object detection method, device and storage medium, wherein the method comprises the following steps: acquiring a target point cloud, wherein the target point cloud is acquired from a detection area by using a detection device; performing dimension reduction mapping on the target point cloud to obtain two-dimensional point data, wherein the two-dimensional points contained in the two-dimensional point data are distributed in rows and columns; clustering each two-dimensional point in the two-dimensional point data to obtain a clustering result; and determining the point cloud belonging to the target object in the target point cloud based on the clustering result. In this way, the detection efficiency for the target object can be improved.

Description

Object detection method, device and storage medium
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a method, an apparatus, and a storage medium for detecting a target object.
Background
In some scenarios, particularly vehicle driving scenarios, obstacles or marker information in the road need to be detected in order to ensure driving safety, so that the vehicle can be controlled to drive safely based on the detected information. When a target obstacle exists in the road, rapidly detecting the obstacle and its position allows the driver to be reminded in time, or gives more time to perform effective obstacle avoidance based on the detected obstacle.
Therefore, how to quickly detect the target is important for safe driving of the vehicle.
Disclosure of Invention
The technical problem mainly solved by this application is to provide a target object detection method, device and storage medium that can improve the efficiency of target object detection.
In order to solve the technical problems, one technical scheme adopted by the application is as follows: provided is a target detection method, comprising: acquiring a target point cloud, wherein the target point cloud is acquired from a detection area by using a detection device; performing dimension reduction mapping on the target point cloud to obtain two-dimensional point data, wherein two-dimensional points contained in the two-dimensional point data are distributed in rows and columns; clustering each two-dimensional point in the two-dimensional point data to obtain a clustering result; and determining the point cloud belonging to the target object in the target point cloud based on the clustering result.
Wherein, the two-dimensional point data respectively take a horizontal view angle and a vertical view angle of the detection device as coordinate axes; and/or the two-dimensional point data is a two-dimensional image.
Performing dimension reduction mapping on the target point cloud to obtain two-dimensional point data, wherein the dimension reduction mapping comprises the following steps: for each target point in the target point cloud, determining a mapping column of the target point by utilizing a view angle of a first direction and resolution of the first direction of the detection device, and determining a mapping row of the target point by utilizing a view angle of a second direction of the detection device and resolution of the second direction, wherein one of the first direction and the second direction is a horizontal direction, the other is a vertical direction, and a crossing point of the mapping row and the mapping column of the target point is defined as a two-dimensional point; and for each two-dimensional point, obtaining a data value corresponding to the two-dimensional point in the two-dimensional point data based on the detection data of the target point corresponding to the two-dimensional point, wherein the detection data of the target point is derived from the target point cloud.
Wherein, before determining the mapping column of the target point by using the viewing angle of the first direction and the resolution of the first direction of the detection device and determining the mapping row of the target point by using the viewing angle of the second direction and the resolution of the second direction of the detection device, the method further comprises: acquiring coordinates of the target point from the target point cloud, wherein the coordinates of the target point comprise a first axis coordinate, a second axis coordinate and a third axis coordinate, and the third axis is parallel to the second direction; determining the mapping column of the target point using the viewing angle of the first direction and the resolution of the first direction of the detection device comprises: determining the mapping column of the target point by using the first axis coordinate, the second axis coordinate, the viewing angle of the first direction and the resolution of the first direction; and determining the mapping row of the target point using the viewing angle of the second direction and the resolution of the second direction of the detection device comprises: determining the mapping row of the target point by using the third axis coordinate of the target point, the viewing angle of the second direction and the resolution of the second direction.
Wherein determining the mapping column of the target point using the first axis coordinate, the second axis coordinate, the viewing angle of the first direction, and the resolution of the first direction of the target point comprises: subtracting the ratio between the first axis coordinate and the second axis coordinate of the target point from the maximum view angle in the first direction to obtain a first difference value, and determining the first ratio between the first difference value and the resolution in the first direction or the adjacent integer of the first ratio as a mapping column of the target point; determining a mapping row of the target point using the third axis coordinate of the target point, the viewing angle of the second direction, and the resolution of the second direction, comprising: and subtracting the ratio of the third axis coordinate of the target point to the first direction distance of the target point from the maximum view angle in the second direction to obtain a second difference value, and determining a second ratio between the second difference value and the resolution in the second direction or an adjacent integer of the second ratio as a mapping row of the target point.
Wherein the first direction is a horizontal direction, and the second direction is a vertical direction; and/or, based on the detection data of the target point corresponding to the two-dimensional point, obtaining a data value corresponding to the two-dimensional point in the two-dimensional point data, including: selecting a target point meeting the requirement of a preset distance from at least one target point corresponding to the two-dimensional point, and taking the detection data of the selected target point as a data value corresponding to the two-dimensional point.
The clustering result comprises a plurality of clusters of two-dimensional points; determining, based on the clustering result, the point cloud belonging to the target object in the target point cloud comprises: for each cluster, searching out the target points corresponding to the two-dimensional points in the cluster from the target point cloud as the candidate point cloud corresponding to the cluster, wherein the candidate point cloud of each cluster corresponds to one object; and searching out, from the candidate point clouds corresponding to the clusters, the candidate point clouds whose characteristic factors conform to the characteristics of the target object, as the point clouds of the target object, wherein the characteristic factors are used for indicating the characteristics of the objects corresponding to the candidate point clouds.
The characteristic factors comprise at least one of the height of an object corresponding to the candidate point cloud, the height of the object from the ground and the perpendicularity of the candidate point cloud; and/or, searching candidate point clouds with characteristic factors conforming to the characteristics of the target object from candidate point clouds corresponding to each cluster, and before the candidate point clouds are used as the point clouds of the target object, obtaining the characteristic factors by at least one of the following steps: determining a boundary frame of a corresponding object by using the candidate point cloud, and acquiring the height of the boundary frame as the height of the object corresponding to the candidate point cloud; obtaining the distance between the bottom surface of the boundary frame and the ground to obtain the height of an object corresponding to the candidate point cloud from the ground; and obtaining a point cloud covariance matrix of the candidate point cloud, carrying out feature decomposition on the point cloud covariance matrix to obtain a feature vector, and obtaining perpendicularity of the candidate point cloud based on the feature vector.
The target point cloud is a point cloud which is extracted from an original point cloud acquired by the detection device on the detection area and is suspected to belong to a target object, the target object comprises the ground, and the target point cloud comprises the ground point cloud belonging to the ground; acquiring the distance between the bottom surface of the bounding box and the ground comprises the following steps: determining ground plane parameters by utilizing a ground point cloud, wherein the ground point cloud is a point cloud belonging to the ground in the original point cloud; and calculating to obtain the distance between the bottom surface and the ground by using the coordinates of the preset points in the bottom surface and the ground plane parameters.
Wherein, before determining the ground plane parameter using the ground point cloud, the method further comprises: dividing the ground point cloud in a preset direction to obtain sub ground point clouds corresponding to a plurality of sub areas respectively; determining ground plane parameters using the ground point cloud, comprising: for each subarea, determining the ground plane parameters of the subarea based on the subarea point clouds in the subarea; calculating a distance between the bottom surface and the ground by using coordinates of preset points in the bottom surface and ground plane parameters, wherein the distance comprises the following steps: and taking the sub-region to which the preset point belongs as a reference sub-region, and calculating to obtain the distance between the bottom surface and the ground by utilizing the coordinates of the preset point and the ground plane parameters of the reference sub-region.
Wherein the target is a cone; and/or, obtaining a target point cloud, comprising: acquiring an original point cloud acquired by a detection device on a detection area; and extracting a target point cloud suspected to belong to the target object from the original point cloud.
The extracting the target point cloud suspected to belong to the target object from the original point cloud comprises the following steps: carrying out semantic segmentation on the original point cloud by utilizing a semantic segmentation model to obtain a plurality of categories of point clouds, wherein the categories comprise ground and targets; and selecting a point cloud belonging to the category of the target object from the plurality of category point clouds as a target point cloud.
In order to solve the technical problem, a further technical scheme adopted by the application is as follows: there is provided an electronic device comprising a memory and a processor coupled to each other, the memory storing program instructions; the processor is configured to execute the program instructions stored in the memory to implement the above-described method.
In order to solve the technical problem, another technical scheme adopted by the application is as follows: there is provided a computer readable storage medium storing program instructions executable to implement the above method.
According to the scheme, after the target point cloud is obtained, the target point cloud is not clustered directly, but is subjected to dimension reduction mapping to obtain two-dimensional point data distributed in rows and columns, and then each two-dimensional point in the two-dimensional point data is clustered to obtain a clustering result. Compared with a direct clustering mode for target points, the clustering mode for two-dimensional points can effectively reduce the number of points to be clustered, so that the clustering efficiency can be improved, and the detection efficiency of target objects can be improved.
Drawings
FIG. 1 is a flow chart of an embodiment of a method for detecting a target object according to the present application;
FIG. 2 is a flowchart illustrating an embodiment of the step S12 shown in FIG. 1;
FIG. 3 is a flowchart illustrating an embodiment of the step S14 shown in FIG. 1;
FIG. 4 is a schematic diagram of a frame of an embodiment of an electronic device provided herein;
fig. 5 is a schematic diagram of a framework of a computer-readable storage medium provided herein.
Detailed Description
In order to make the objects, technical solutions and effects of the present application clearer and more specific, the present application will be further described in detail below with reference to the accompanying drawings and examples.
In addition, if descriptions of "first", "second", etc. appear in the embodiments of the present application, they are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined by "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the embodiments may be combined with each other, but only on the basis that they can be realized by those skilled in the art; when the combination of technical solutions is contradictory or cannot be realized, such a combination should be considered not to exist and is not within the protection scope of the present application.
Referring to fig. 1, fig. 1 is a flow chart of an embodiment of a target detection method provided in the present application. It should be noted that, if there are substantially the same results, the present embodiment is not limited to the flow sequence shown in fig. 1. As shown in fig. 1, the present embodiment includes:
s11: and acquiring a target point cloud, wherein the target point cloud is acquired from a detection area by using a detection device.
The embodiment is used for performing dimension reduction mapping on the collected target point cloud, reducing the dimension of the three-dimensional point cloud into two-dimensional points, so that a clustering result is quickly obtained by clustering a small number of points, and further, the point cloud belonging to the target object is quickly determined.
The target object in this context may be any object to be detected. For example, in a vehicle driving scenario, the target object may be a traffic marker on a road, such as a cone; of course, it may also be another obstacle in the road, such as another vehicle or a pedestrian. The detection area may be the maximum detection area of the detection device, or a region of interest, selected from the maximum detection area, in which the target object may appear. The detection device may be a laser radar, a millimeter wave radar, or the like. A laser radar can accurately measure the geometric characteristics of an object, and is preferably used for detection when the geometric characteristics of the target object differ significantly from those of other, non-target objects.
In some embodiments, acquiring the target point cloud comprises: firstly, acquiring an original point cloud acquired by a detection device on a detection area; and then extracting target point clouds suspected to belong to the target object from the acquired original point clouds. The original point cloud is a point cloud corresponding to each object existing in the detection area acquired by the detection device.
In a specific embodiment, extracting the target point cloud suspected to belong to the target object from the acquired original point cloud includes: firstly, performing semantic segmentation on the original point cloud with a semantic segmentation model to obtain point clouds of a plurality of categories, wherein the categories include the ground and the target object; and then selecting the point cloud belonging to the target object category from the point clouds of the plurality of categories as the target point cloud. In other words, the semantic segmentation model first determines the point cloud belonging to the ground and the point cloud suspected to be the target object, and the target point cloud of the target object category is then selected from them. In this embodiment, point clouds that obviously do not belong to the target object, or that are unfavorable for determining the target object, can be rapidly removed through the semantic segmentation model, so that the point cloud belonging to the target object can subsequently be determined rapidly from the target point cloud.
In a specific embodiment, the detection device is a laser radar. In order for the semantic segmentation model to perform more accurate semantic segmentation on the original point cloud and obtain an accurate segmentation result, the coordinate information of the original point cloud and the reflectivity of each point in the original point cloud can be input into the semantic segmentation model. Since different objects reflect the laser light differently, objects can be distinguished by their reflectivity, so an accurate segmentation result is easier to obtain when the semantic segmentation model processes an original point cloud that includes reflectivity.
In another embodiment, a random sample consensus algorithm may be adopted to perform ground segmentation on the original point cloud to obtain the ground point cloud belonging to the ground, with the remaining point clouds in the original point cloud treated as obstacle point clouds. Then, according to the height information of the target object to be detected, points whose height is close to the height of the target object are extracted from the obstacle point clouds as the target point cloud suspected to belong to the target object.
It should be noted that, in some embodiments, the target point cloud only includes a point cloud suspected to be a target object; in other embodiments, the target point cloud includes a ground point cloud belonging to the ground in addition to the point cloud suspected of being the target. The specific target point cloud may be determined according to actual needs, and in some scenarios, for example, in the following needs to screen the target object from the obstacle according to the distance between the obstacle and the ground, the acquired target point cloud may include the point cloud suspected to be the target object and the ground point cloud belonging to the ground.
S12: and performing dimension reduction mapping on the target point cloud to obtain two-dimensional point data, wherein two-dimensional points contained in the two-dimensional point data are distributed in rows and columns.
Before performing dimension reduction mapping on the target point cloud to obtain two-dimensional point data, determining a two-dimensional mapping space, and then mapping the target point cloud to the two-dimensional mapping space to obtain corresponding two-dimensional point data. In this embodiment, the two-dimensional points included in the two-dimensional point data are distributed in rows and columns in the two-dimensional mapping space, that is, the two-dimensional points have row and column information in the two-dimensional mapping space.
In an embodiment, the horizontal view angle and the vertical view angle of the detection device are respectively used as coordinate axes to obtain a two-dimensional mapping space, i.e. the two-dimensional point data respectively uses the horizontal view angle and the vertical view angle of the detection device as coordinate axes. Of course, in other embodiments, other coordinate axes for determining two directions for the reference object may be used to obtain the two-dimensional mapping space. For example, in a vehicle driving scene, a two-dimensional mapping space may be obtained with a vehicle as a reference and a traveling direction of the vehicle and a direction perpendicular to the traveling direction as coordinate axes.
In one embodiment, the two-dimensional point data obtained is a two-dimensional image. The target point cloud is mapped into a two-dimensional image, and it can be understood that when the target point cloud is mapped into the two-dimensional image, clustering is performed on each two-dimensional point in the two-dimensional image to obtain a corresponding clustering result.
Taking two-dimensional point data in the form of a two-dimensional image as an example, in some embodiments the size of the two-dimensional image may be determined according to the horizontal viewing angle and the vertical viewing angle of the detection device, and the size of each pixel in the two-dimensional image may be determined according to the resolution of the detection device in the horizontal direction and in the vertical direction. After the image size and pixel size are determined, the mapping row and mapping column of each target point after dimension reduction (i.e., the position of the corresponding two-dimensional point in the two-dimensional image) can be determined. Since the position at the intersection of a given mapping row and mapping column may correspond to a plurality of target points in the target point cloud, in order to facilitate subsequent clustering with fewer target points, a target point meeting a preset requirement may be selected from those target points, and the detection data of the selected target point taken as the data value corresponding to the two-dimensional point.
Specifically, referring to fig. 2, fig. 2 is a flow chart illustrating an embodiment of step S12 shown in fig. 1. It should be noted that, if there are substantially the same results, the embodiment is not limited to the flow sequence shown in fig. 2. As shown in fig. 2, the present embodiment includes:
S21: for each target point in the target point cloud, a mapping column of the target point is determined by utilizing a view angle of a first direction and a resolution of the first direction of the detection device, a mapping row of the target point is determined by utilizing a view angle of a second direction of the detection device and a resolution of the second direction, one of the first direction and the second direction is a horizontal direction, the other is a vertical direction, and an intersection point of the mapping row and the mapping column of the target point is defined as a two-dimensional point.
In general, this embodiment determines the mapping row and the mapping column of the dimension-reduced two-dimensional point according to the coordinates of the target point, the viewing angle and corresponding resolution of the detection device in the horizontal direction, and the viewing angle and corresponding resolution of the detection device in the vertical direction. Before the mapping row and the mapping column of the target point are determined, the coordinates of the target point should also be obtained from the target point cloud; the coordinates of the target point are three-dimensional coordinates including a first axis coordinate, a second axis coordinate and a third axis coordinate, with the third axis parallel to the second direction. In one embodiment, the first direction of the detection device may be set to the horizontal direction and the second direction to the vertical direction, so that the third axis coordinate of the target point is its coordinate in the vertical direction.
In this embodiment, determining a mapping column of the target point by using the view angle of the first direction and the resolution of the first direction of the detection device includes: and determining a mapping column of the target point by using the first axis coordinate, the second axis coordinate, the view angle of the first direction and the resolution of the first direction of the target point.
Specifically, the mapping column of the target point can be determined with reference to the following formula:

Col = (Fov_left - P_y / P_x) / H_ratio

where Col represents the mapping column of the target point, Fov_left represents the maximum viewing angle in the first direction, P_y / P_x represents the ratio between the first axis coordinate and the second axis coordinate of the target point, and H_ratio represents the resolution in the first direction.
In this embodiment, the ratio between the first axis coordinate and the second axis coordinate of the target point is subtracted from the maximum view angle in the first direction to obtain a first difference value, and then the first ratio between the first difference value and the resolution in the first direction or the adjacent integer of the first ratio is determined as the mapping column of the target point.
Furthermore, determining a mapping row of the target point using the view angle of the second direction and the resolution of the second direction of the detection device, comprising: and determining the mapping row of the target point by using the third axis coordinate of the target point, the view angle of the second direction and the resolution of the second direction.
Specifically, the mapping row of the target point can be determined with reference to the following formula:

Row = (Fov_up - P_z / P_range) / V_ratio

where Row represents the mapping row of the target point, Fov_up represents the maximum viewing angle in the second direction, P_z represents the third axis coordinate of the target point, P_range is the first-direction distance of the target point, and V_ratio is the resolution in the second direction.
In this embodiment, the ratio between the third axis coordinate of the target point and the first-direction distance of the target point is subtracted from the maximum viewing angle in the second direction to obtain a second difference value, and then the second ratio between the second difference value and the resolution in the second direction, or the adjacent integer of the second ratio, is determined as the mapping row of the target point.
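As an illustration, the two mapping formulas can be implemented as in the following Python sketch. The axis convention (the second axis x pointing forward, the first axis y lateral, the third axis z vertical), the use of radians, and the interpretation of P_range as the planar distance sqrt(x^2 + y^2) are assumptions made for this sketch, not details fixed by the present embodiment.

```python
import numpy as np

def map_point_to_cell(p, fov_left, fov_up, h_ratio, v_ratio):
    """Dimension reduction mapping of one target point to a (row, col) cell.

    p = (x, y, z): x is the second-axis coordinate, y the first-axis
    coordinate, z the third-axis (vertical) coordinate -- an assumed axis
    convention. fov_left / fov_up are the maximum viewing angles in the
    first (horizontal) and second (vertical) directions; h_ratio / v_ratio
    are the corresponding angular resolutions, all in radians.
    """
    x, y, z = p
    # Col = (Fov_left - P_y / P_x) / H_ratio, exactly as in the formula
    # above; assumes the point lies in front of the device (x > 0).
    col = (fov_left - y / x) / h_ratio
    # P_range: the first-direction (horizontal) distance of the target
    # point, assumed here to be the planar range sqrt(x^2 + y^2).
    p_range = np.hypot(x, y)
    # Row = (Fov_up - P_z / P_range) / V_ratio.
    row = (fov_up - z / p_range) / v_ratio
    # The "adjacent integer" of each ratio is taken as the cell index.
    return int(round(row)), int(round(col))
```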
S22: and for each two-dimensional point, obtaining a data value corresponding to the two-dimensional point in the two-dimensional point data based on the detection data of the target point corresponding to the two-dimensional point, wherein the detection data of the target point is derived from the target point cloud.
After the mapping rows and mapping columns of the target points are acquired, the intersection area of a given mapping row and mapping column may contain a plurality of target points. So that fewer target points are used for clustering later, a target point meeting a preset distance requirement is selected from the target points in the intersection area, and its detection data is taken as the data value corresponding to the two-dimensional point.
Wherein, each of the plurality of target points in the crossing area has a corresponding distance value from the center position of the crossing area. In one embodiment, the target point meeting the preset distance requirement is a point with the largest distance value among the plurality of target points; in another embodiment, the target point meeting the preset distance requirement is a point with the smallest distance value among the plurality of target points; of course, in other embodiments, the target point meeting the preset distance requirement may also be a point whose distance value is closest to the average distance value of all the target points, and the specific distance requirement may be set according to actual needs. In addition, the detection data of the target point may be determined according to a specific detection device, for example, in the case where the detection device is a lidar, the detection data includes a point cloud coordinate and a point cloud reflectivity, and the data value corresponding to the two-dimensional point includes a two-dimensional point coordinate and a reflectivity.
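Building on the previous sketch, the following assumed-name helper assembles the full row/column-organized two-dimensional point data. For simplicity it uses the range to the detection device as the distance value and keeps the nearest point per cell; the embodiment above measures the distance relative to the center position of the intersection area, so this is just one illustrative choice of preset distance requirement.

```python
import numpy as np

def build_two_dimensional_points(points, rows, cols,
                                 fov_left, fov_up, h_ratio, v_ratio):
    """Map a target point cloud into row/column-organized 2-D point data.

    points: (N, 3) array. Returns, per cell, the index of the selected
    target point (-1 for empty cells) and its range, so that each
    two-dimensional point can be traced back to the original cloud.
    """
    best_range = np.full((rows, cols), np.inf)
    best_index = np.full((rows, cols), -1, dtype=int)
    for i, p in enumerate(points):
        r, c = map_point_to_cell(p, fov_left, fov_up, h_ratio, v_ratio)
        if not (0 <= r < rows and 0 <= c < cols):
            continue  # point falls outside the viewing angles
        rng = float(np.linalg.norm(p))
        if rng < best_range[r, c]:  # assumed requirement: keep nearest point
            best_range[r, c] = rng
            best_index[r, c] = i
    return best_index, best_range
```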
S13: and clustering each two-dimensional point in the two-dimensional point data to obtain a clustering result.
In one embodiment, a breadth-first search algorithm may be used to cluster each two-dimensional point in the two-dimensional point data to obtain the clustering result; of course, other clustering algorithms (e.g., the DBSCAN algorithm) can also be used. The obtained clustering result includes a plurality of clusters of two-dimensional points, where the two-dimensional points contained in each cluster belong to the same object.
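As a sketch of the breadth-first search option, occupied cells can be flood-filled over their 4-neighborhood. The range-continuity test that decides whether two adjacent cells belong to the same object, and its max_gap threshold, are assumptions added for illustration; the embodiment only names the algorithm.

```python
import numpy as np
from collections import deque

def bfs_cluster(best_index, best_range, max_gap=0.5):
    """Cluster occupied cells of the 2-D point data with breadth-first search.

    Adjacent occupied cells whose range values differ by less than max_gap
    (an illustrative threshold) receive the same cluster label. Returns a
    label grid with -1 for empty cells.
    """
    rows, cols = best_index.shape
    labels = np.full((rows, cols), -1, dtype=int)
    next_label = 0
    for r0 in range(rows):
        for c0 in range(cols):
            if best_index[r0, c0] < 0 or labels[r0, c0] >= 0:
                continue  # empty cell or already labeled
            labels[r0, c0] = next_label
            queue = deque([(r0, c0)])
            while queue:
                r, c = queue.popleft()
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if (0 <= nr < rows and 0 <= nc < cols
                            and best_index[nr, nc] >= 0
                            and labels[nr, nc] < 0
                            and abs(best_range[nr, nc] - best_range[r, c]) < max_gap):
                        labels[nr, nc] = next_label
                        queue.append((nr, nc))
            next_label += 1
    return labels
```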
S14: and determining the point cloud belonging to the target object in the target point cloud based on the clustering result.
In an embodiment, the clustering result includes a plurality of clusters of two-dimensional points, and the two-dimensional points of each cluster belong to the same object. After clustering, for each cluster, the target points corresponding to the two-dimensional points in the cluster are searched out from the target point cloud and taken as the candidate point cloud corresponding to the cluster; then, from the candidate point clouds corresponding to the clusters, the candidate point clouds whose feature factors conform to the features of the target object are searched out and taken as the point clouds of the target object. In this way, the features of the target object are used to find the objects that match it.
Specifically, referring to fig. 3, fig. 3 is a flow chart illustrating an embodiment of step S14 shown in fig. 1. It should be noted that, if there are substantially the same results, the embodiment is not limited to the flow sequence shown in fig. 3. As shown in fig. 3, the present embodiment includes:
S31: for each cluster, searching out the target points corresponding to the two-dimensional points in the cluster from the target point cloud, wherein the searched-out target points are used as the candidate point cloud corresponding to the cluster, and the candidate point cloud of each cluster corresponds to one object.
In this embodiment, the clustering result includes a plurality of clusters of two-dimensional points, where the two-dimensional points of each cluster belong to the same object. After the two-dimensional points of each cluster are obtained, the target points corresponding to the two-dimensional points in the cluster are searched out from the target point cloud, and the searched-out target points are taken as the candidate point cloud corresponding to the cluster, so that the feature factors of each cluster's candidate point cloud can conveniently be used to determine the target object. The candidate point cloud of each cluster corresponds to one object; for example, the candidate point cloud of cluster A corresponds to object A, and the candidate point cloud of cluster B corresponds to object B.
S32: and searching candidate point clouds with characteristic factors conforming to the characteristics of the target object from the candidate point clouds corresponding to the clusters, wherein the characteristic factors are used for indicating the characteristics of the object corresponding to the candidate point clouds as the point clouds of the target object.
In this embodiment, the feature factors are used to characterize the object corresponding to a candidate point cloud. The feature factors include at least one of: the height of the object corresponding to the candidate point cloud, the height of the object from the ground, and the perpendicularity of the candidate point cloud. The height of the object represents the object's overall height; the height from the ground represents the distance between the bottom surface of the object and the ground; and the perpendicularity of the candidate point cloud represents how the candidate point cloud is distributed along the different axial directions.
In an embodiment, the feature factor includes the height of the object corresponding to the candidate point cloud, and in this embodiment, according to the height information of the object, the object with the height matching the height of the target object may be selected, and then the target object may be determined from the selected object; in another embodiment, the feature factor includes a height of the object from the ground, and in this embodiment, the object whose height from the ground satisfies a height of the target object from the ground may be selected according to the height of the object from the ground, and then the target object is determined from the selected object; in still another embodiment, the feature factor includes verticality of the candidate point cloud, and in this embodiment, an object whose verticality meets the target object verticality may be selected according to the verticality of the object in the target axis direction, and then the target object may be determined from the selected object.
In a specific embodiment, the feature factors include the height of the object corresponding to the candidate point cloud, the height of the object from the ground, and the perpendicularity of the candidate point cloud, and the candidate point cloud with the feature factors conforming to the features of the target object can be sequentially screened out as the point cloud of the target object according to a preset priority sequence of the height of the object, the height of the object from the ground, and the perpendicularity of the candidate point cloud. For example, the object height from the ground is firstly utilized to carry out preliminary screening, then the object height is utilized to carry out further screening, and finally the perpendicularity of the candidate point cloud is utilized to determine the target object.
According to the scheme, after the target point cloud is obtained, the target point cloud is not clustered directly, but is subjected to dimension reduction mapping to obtain two-dimensional point data distributed in rows and columns, and then each two-dimensional point in the two-dimensional point data is clustered to obtain a clustering result. Compared with a direct clustering mode for target points, the clustering mode for two-dimensional points can effectively reduce the number of points to be clustered, so that the clustering efficiency can be improved, and the detection efficiency of target objects can be improved.
In an embodiment, before the candidate point clouds whose feature factors conform to the features of the target object are searched out from the candidate point clouds corresponding to the clusters and taken as the point clouds of the target object, the feature factors are first obtained. Obtaining the feature factors includes at least one of the following steps:
first, determining a boundary frame of a corresponding object by using the candidate point cloud, and acquiring the height of the boundary frame, wherein the height of the boundary frame is the height of the object corresponding to the candidate point cloud.
After the candidate point cloud is determined, a bounding box of the corresponding object and the height of the bounding box can be determined according to the three-dimensional coordinates of each point in the candidate point cloud, wherein the height of the bounding box is the height of the object corresponding to the candidate point cloud.
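A minimal sketch of this step, assuming an axis-aligned bounding box with the vertical (z) axis as the height direction:

```python
import numpy as np

def bounding_box(candidate_points):
    """Bounding box of one cluster's candidate point cloud and its height.

    candidate_points: (N, 3) array of the target points recovered for a
    cluster. An axis-aligned box is assumed; the embodiment does not fix
    the box type.
    """
    lo = candidate_points.min(axis=0)
    hi = candidate_points.max(axis=0)
    height = float(hi[2] - lo[2])  # box height = object height
    return lo, hi, height
```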
Secondly, obtaining the distance between the bottom surface of the bounding box and the ground, to obtain the height of the object corresponding to the candidate point cloud from the ground.
In an embodiment, the target point cloud is a point cloud suspected to belong to a target object, which is extracted from an original point cloud acquired by the detection device on the detection area, wherein the target object comprises a ground, and the target point cloud comprises a ground point cloud belonging to the ground. Illustratively, the semantic segmentation model may be used to semantically segment the original point cloud to obtain a ground point cloud belonging to the ground.
After the ground point cloud belonging to the ground is obtained, the ground plane parameters are determined by using the ground point cloud, and the distance between the bottom surface and the ground is calculated by using the coordinates of a preset point on the bottom surface of the object's bounding box and the ground plane parameters. The preset point may be any point on the bottom surface of the bounding box, or a point adjacent to the centroid of the bottom-surface points. The ground plane parameters are expressed by the equation of the ground plane, and once the ground plane parameters and the preset point are determined, the distance between the bottom surface of the bounding box and the ground can be obtained by calculating the point-to-plane distance.
Optionally, after the ground point cloud is obtained, the ground plane parameters can be determined using the ground point cloud, the coordinates of the center point of the bounding box's bottom surface can be determined using the centroid coordinates of the bounding box and the box height, and this bottom-surface center point can be used as the preset point; the distance between the bottom surface of the bounding box and the ground is then obtained by calculating the point-to-plane distance.
For the determination of the ground plane parameters, in one embodiment a random sample consensus (RANSAC) algorithm may be used to filter the ground point cloud. For example, 3 ground points are randomly sampled and a ground plane model Ax + By + Cz + D = 0 is constructed; the proportion of ground points lying within a certain threshold of this plane model is then counted. This is iterated many times, and the best ground plane model found is used as the ground plane parameters. Of course, other fitting algorithms can also be used to obtain the ground plane parameters from the ground point cloud.
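The random sample consensus loop described above might be sketched as follows; the iteration count and inlier threshold are illustrative placeholders, not values from the present embodiment.

```python
import numpy as np

def ransac_ground_plane(ground_points, iterations=100, dist_thresh=0.05,
                        seed=None):
    """Fit ground plane parameters (A, B, C, D) by random sample consensus.

    Repeatedly constructs a plane Ax + By + Cz + D = 0 through 3 randomly
    sampled ground points and keeps the model with the most points within
    dist_thresh of the plane.
    """
    rng = np.random.default_rng(seed)
    best_plane, best_inliers = None, -1
    for _ in range(iterations):
        p1, p2, p3 = ground_points[rng.choice(len(ground_points), 3,
                                              replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # degenerate sample: the 3 points are collinear
        normal = normal / norm  # unit normal, so distances need no rescaling
        d = -float(normal @ p1)
        inliers = int((np.abs(ground_points @ normal + d) < dist_thresh).sum())
        if inliers > best_inliers:
            best_inliers, best_plane = inliers, (*normal, d)
    return best_plane  # (A, B, C, D) of the best-supported model
```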
In some embodiments, considering that the ground may be sloped, in order to accurately obtain the distance between the object and the ground, the ground point cloud may be segmented in a preset direction before the ground plane parameters are determined, so as to obtain the sub-ground point clouds corresponding to a plurality of sub-areas. The preset direction is the forward direction of the vehicle.
In this embodiment, determining the ground plane parameter using the ground point cloud includes: for each sub-region, a ground plane parameter for the sub-region is determined based on the sub-ground point clouds in the sub-region. Wherein the ground plane parameters of the respective sub-areas may be determined in the manner described above for determining the ground plane parameters.
In addition, after the plurality of sub-areas are determined, the sub-area to which the preset point on the bottom surface of the object's bounding box belongs is taken as the reference sub-area, and the distance between the bottom surface and the ground is calculated using the coordinates of the preset point and the ground plane parameters of the reference sub-area.
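Combining the sub-area segmentation with the point-to-plane distance, a sketch of the height-above-ground computation follows. The forward-direction binning by x_edges and all names are assumptions; the planes are those fitted per sub-area (e.g., with ransac_ground_plane above).

```python
import numpy as np

def height_above_ground(bottom_point, sub_planes, x_edges):
    """Distance from a bounding-box bottom point to its local ground plane.

    sub_planes: one (A, B, C, D) tuple per sub-area; x_edges: the
    forward-direction boundaries used to segment the ground point cloud.
    """
    # Reference sub-area: the one the preset point falls into (clamped).
    idx = int(np.searchsorted(x_edges, bottom_point[0])) - 1
    idx = min(max(idx, 0), len(sub_planes) - 1)
    a, b, c, d = sub_planes[idx]
    x, y, z = bottom_point
    # Standard point-to-plane distance; (a, b, c) is a unit normal here.
    return abs(a * x + b * y + c * z + d)
```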
Thirdly, acquiring a point cloud covariance matrix of the candidate point cloud, performing feature decomposition on the point cloud covariance matrix to obtain a feature vector, and acquiring perpendicularity of the candidate point cloud based on the feature vector.
In this embodiment, after the candidate point cloud is determined, the point cloud covariance matrix of the candidate point cloud is calculated from the coordinates of each point in the candidate point cloud, and feature decomposition is then performed on the covariance matrix to obtain the feature vector. The obtained feature vector is composed of the eigenvalues in all axial directions (the two horizontal axes and the vertical axis), and the magnitude of the eigenvalue in each axial direction represents the distribution of the candidate point cloud along that direction. For example, if the eigenvalue on the vertical axis is greater than the eigenvalues on the two horizontal axes, the candidate point cloud is distributed more along the vertical axis and less along the horizontal axes, indicating an elongated object; if the eigenvalue on the vertical axis is smaller than the eigenvalues on the horizontal axes, the candidate point cloud is distributed less along the vertical axis and more along the horizontal axes, indicating a wide object.
In this embodiment, the perpendicularity of the candidate point cloud in the target axial direction may be obtained based on the feature vector, where the perpendicularity in the target axial direction is the specific gravity of the feature value in the target axial direction in the sum of the feature values in all axial directions. The target axis can be selected according to the actual situation of the target object, for example, if the target object is an elongated object, the target axis can be determined to be a vertical axis, and then the perpendicularity of the candidate point cloud in the target axis can be obtained.
In a specific embodiment, feature decomposition is performed on the point cloud covariance matrix of the candidate point cloud to obtain the feature vector vec = (E_x, E_y, E_z), and the perpendicularity of the candidate point cloud on the vertical axis is:

Ver = E_z / (E_x + E_y + E_z)

where Ver represents the verticality, E_z represents the eigenvalue on the vertical axis, and E_x and E_y represent the eigenvalues on the two horizontal axes.
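A sketch of this computation follows. Pairing each coordinate axis with the eigenvalue whose eigenvector is most aligned with that axis is an assumed reading of E_x, E_y, E_z; the embodiment does not spell out the pairing.

```python
import numpy as np

def verticality(candidate_points):
    """Ver = E_z / (E_x + E_y + E_z) from the point cloud covariance matrix."""
    cov = np.cov(candidate_points.T)        # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # columns of eigvecs = eigenvectors
    # Pair each axis with the eigenvalue whose eigenvector points most
    # strongly along that axis -- an assumed reading of E_x, E_y, E_z.
    e = [eigvals[int(np.argmax(np.abs(eigvecs[axis, :])))] for axis in range(3)]
    e_x, e_y, e_z = e
    return float(e_z / (e_x + e_y + e_z))
```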
After the feature factors are obtained in the above manner, candidate point clouds with the feature factors conforming to the features of the target object are found out from the candidate point clouds corresponding to each cluster, and the candidate point clouds are used as the point clouds of the target object.
When the feature factors include the height of the object corresponding to the candidate point cloud, the candidate point clouds whose object height conforms to the height of the target object are selected from the candidate point clouds corresponding to the objects, according to the actual height of the target object, and taken as the point clouds of the target object; objects whose height does not conform are thereby filtered out. Specifically, a height interval can be set according to the actual height of the target object, and objects whose height falls outside this interval, for example objects obviously taller or shorter than the target object, are filtered out, finally obtaining the target object and the target object point cloud.
When the feature factors include the height of the object from the ground, the candidate point clouds whose height from the ground matches that of the target object may be selected from the candidate point clouds corresponding to the objects and taken as the point clouds of the target object, so that objects whose height above the ground does not match that of the target object are filtered out. For example, if the target object is a cone placed on a road, the cone rests directly on the ground, and objects at some height above the ground, such as a fence along the road, can be filtered out by the height from the ground.
When the feature factors include the perpendicularity of the candidate point cloud, the candidate point clouds whose perpendicularity meets the perpendicularity requirement of the target object can be selected from the candidate point clouds corresponding to the objects, according to the perpendicularity of the target object in the target axial direction, and taken as the point clouds of the target object, so that objects whose perpendicularity does not meet the requirement are filtered out. Specifically, a perpendicularity threshold may be set according to the actual perpendicularity of the target object, and objects whose perpendicularity does not meet this threshold are filtered out, finally obtaining the target object and the target object point cloud.
The factors specifically included in the feature factors can be set according to the actual detection scenario. For example, in an unmanned-driving scenario, an autonomous vehicle needs the capability to identify road traffic signs in order to cope with complex traffic scenes. Construction sites for road maintenance are a relatively common case; such sites generally use traffic cones to enclose the construction area as a safety warning and advance notice to vehicles and pedestrians, so an autonomous road vehicle needs to accurately identify signs such as cones to ensure safe driving. In this scenario, in order to improve driving safety, the vehicle is required to accurately recognize the cone.
In an embodiment, after the original point cloud acquired by the detection device on the detection area is obtained, semantic segmentation is performed on the original point cloud with a semantic segmentation model to obtain the target point cloud, which includes the point clouds suspected to belong to cones. After the target point cloud is obtained, dimension reduction mapping is performed on it to obtain two-dimensional point data in the form of a two-dimensional image, and each two-dimensional point in the two-dimensional image is then clustered to obtain a clustering result comprising a plurality of clusters of two-dimensional points. For each cluster, the target points corresponding to the two-dimensional points in the cluster are found from the target point cloud and taken as the candidate point cloud corresponding to the cluster; the candidate point cloud of each cluster corresponds to one object suspected to be a cone, i.e., the clustering result covers every object suspected to be a cone. After the candidate point clouds corresponding to all clusters (all objects) are found, the candidate point clouds whose feature factors conform to the features of the cone are selected from them as the point clouds of the cones.
In one embodiment, the feature factors include the height of the object, the height of the object from the ground, and the perpendicularity of the candidate point cloud. Although cones come in multiple heights, the height of a cone generally falls within a certain interval. Furthermore, cones used for safety warning and advance notice are generally placed on the ground, i.e. there is no height gap between a cone and the ground. In addition, a cone is an elongated object, so its perpendicularity on the vertical axis is greater than or equal to a preset perpendicularity threshold.
In this embodiment, a first target object with a height within the height interval may be selected from the objects through the height interval, then a second target object with a height distance from the ground smaller than a preset threshold (here, a threshold set for taking account of calculation errors) is selected from the selected objects, and then an object with a verticality greater than or equal to the preset verticality threshold in the vertical axis direction is selected from the second target object as a cone.
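To illustrate the cascade, a filter in the order described above (height interval, then height from the ground, then perpendicularity) might look like the following sketch, reusing the helpers sketched earlier; every threshold is an illustrative placeholder, since the concrete values are left open.

```python
def select_cone_point_clouds(candidates, sub_planes, x_edges,
                             h_min=0.3, h_max=0.9,
                             ground_gap=0.05, ver_thresh=0.6):
    """Keep the candidate point clouds whose feature factors match a cone.

    candidates: list of (N_i, 3) numpy arrays, one per cluster.
    """
    cones = []
    for pts in candidates:
        lo, hi, height = bounding_box(pts)
        if not (h_min <= height <= h_max):
            continue  # height outside the cone's height interval
        bottom_center = ((lo[0] + hi[0]) / 2, (lo[1] + hi[1]) / 2, lo[2])
        if height_above_ground(bottom_center, sub_planes, x_edges) >= ground_gap:
            continue  # not resting on the ground
        if verticality(pts) < ver_thresh:
            continue  # not elongated along the vertical axis
        cones.append(pts)
    return cones
```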
Referring to fig. 4, fig. 4 is a schematic frame diagram of an embodiment of an electronic device provided in the present application. In this embodiment, the electronic device 40 includes a memory 41 and a processor 42 coupled to each other.
The memory 41 stores program instructions, and the processor 42 is configured to execute the program instructions stored in the memory 41 to implement the steps of any of the method embodiments described above. In one particular implementation scenario, the electronic device 40 may include, but is not limited to, a microcomputer or a server; the electronic device 40 may also be a mobile device such as a notebook computer or a tablet computer, which is not limited herein.
In particular, the processor 42 is adapted to control itself and the memory 41 to implement the steps of any of the embodiments described above. The processor 42 may also be referred to as a CPU (Central Processing Unit). The processor 42 may be an integrated circuit chip having signal processing capabilities. The processor 42 may also be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 42 may be implemented jointly by a plurality of integrated circuit chips.
Referring to fig. 5, fig. 5 is a schematic diagram of a framework of a computer-readable storage medium provided in the present application. The computer-readable storage medium 50 of this embodiment stores program instructions 51 that, when executed, implement the method provided by any of the above-described embodiments or any non-conflicting combination thereof. The program instructions 51 may form a program file stored in the computer-readable storage medium 50 as a software product, so that a computer device (which may be a personal computer, a server, or a network device) performs all or part of the steps of the methods according to the embodiments of the present application. The aforementioned computer-readable storage medium 50 includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code, or a terminal device such as a computer, server, mobile phone, or tablet.
According to the above scheme, after the target point cloud is obtained, it is not clustered directly; instead, it is first subjected to dimension-reduction mapping to obtain two-dimensional point data distributed in rows and columns, and each two-dimensional point in the two-dimensional point data is then clustered to obtain a clustering result. Compared with clustering the target points directly, clustering the two-dimensional points effectively reduces the number of points to be clustered, thereby improving clustering efficiency and, in turn, the detection efficiency for target objects.
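To make the dimension-reduction mapping concrete, here is a hedged Python sketch: each 3-D target point is mapped to a (row, column) cell using the detection device's fields of view and angular resolutions, and the occupied cells are then clustered. The field-of-view and resolution numbers, and the use of connected-component labeling as the clustering step, are illustrative assumptions rather than the patent's prescribed method.

```python
import numpy as np
from scipy import ndimage

def project_to_grid(points, h_fov=(-45.0, 45.0), v_fov=(-15.0, 15.0),
                    h_res=0.2, v_res=0.4):
    """Map 3-D points (N, 3) to (row, col) cells; angles in degrees.
    The column comes from the horizontal angle atan2(y, x), the row from
    the vertical angle atan2(z, horizontal range); both are offset by the
    maximum viewing angle and divided by the angular resolution."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    h_ang = np.degrees(np.arctan2(y, x))
    v_ang = np.degrees(np.arctan2(z, np.hypot(x, y)))
    col = ((h_fov[1] - h_ang) / h_res).astype(int)
    row = ((v_fov[1] - v_ang) / v_res).astype(int)
    n_rows = int((v_fov[1] - v_fov[0]) / v_res) + 1
    n_cols = int((h_fov[1] - h_fov[0]) / h_res) + 1
    grid = np.zeros((n_rows, n_cols), dtype=bool)
    valid = (row >= 0) & (row < n_rows) & (col >= 0) & (col < n_cols)
    grid[row[valid], col[valid]] = True
    return grid, row, col, valid

def cluster_grid(grid):
    """Cluster occupied cells via 8-connected component labeling."""
    labels, num = ndimage.label(grid, structure=np.ones((3, 3)))
    return labels, num
```

Because many 3-D points can fall into the same cell, the grid holds far fewer elements than the raw point cloud, which is where the claimed efficiency gain comes from; the original target points behind each labeled cell can then be looked up to recover the candidate point cloud of each cluster.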
The foregoing description of the various embodiments is intended to highlight the differences between them; for parts that are the same or similar, the embodiments may be referred to one another, and the details are not repeated here for brevity.
The foregoing description covers only embodiments of the present application and is not intended to limit the scope of protection of the present application; any equivalent structure or equivalent process made using the contents of the present application, whether applied directly or indirectly in other related technical fields, likewise falls within the scope of protection of the present application.

Claims (14)

1. A method of detecting a target, the method comprising:
acquiring a target point cloud, wherein the target point cloud is acquired from a detection area by utilizing a detection device;
performing dimension reduction mapping on the target point cloud to obtain two-dimensional point data, wherein two-dimensional points contained in the two-dimensional point data are distributed in rows and columns;
clustering each two-dimensional point in the two-dimensional point data to obtain a clustering result;
and determining a point cloud belonging to the target object in the target point cloud based on the clustering result.
2. The method of claim 1, wherein coordinate axes of the two-dimensional point data are respectively a horizontal viewing angle and a vertical viewing angle of the detection device;
and/or, the two-dimensional point data is a two-dimensional image.
3. The method according to claim 1 or 2, wherein the performing dimension-reduction mapping on the target point cloud to obtain two-dimensional point data includes:
for each target point in the target point cloud, determining a mapping column of the target point by utilizing a view angle of a first direction and resolution of the first direction of the detection device, and determining a mapping row of the target point by utilizing a view angle of a second direction and resolution of the second direction of the detection device, wherein one of the first direction and the second direction is a horizontal direction, the other is a vertical direction, and an intersection point of the mapping row and the mapping column of the target point is defined as the two-dimensional point;
and for each two-dimensional point, obtaining a data value corresponding to the two-dimensional point in the two-dimensional point data based on the detection data of the target point corresponding to the two-dimensional point, wherein the detection data of the target point is derived from the target point cloud.
4. The method according to claim 3, characterized in that before the determining the mapping column of the target point by using the viewing angle of the first direction and the resolution of the first direction of the detection device and the determining the mapping row of the target point by using the viewing angle of the second direction and the resolution of the second direction of the detection device, the method further comprises:
acquiring coordinates of the target point from the target point cloud, wherein the coordinates of the target point comprise a first axis coordinate, a second axis coordinate and a third axis coordinate, and the third axis is parallel to the second direction;
the determining the mapping column of the target point by using the view angle of the first direction and the resolution of the first direction of the detecting device comprises:
determining a mapping column of the target point by using a first axis coordinate, a second axis coordinate, a viewing angle of the first direction and a resolution of the first direction of the target point;
the determining the mapping row of the target point by using the view angle of the second direction and the resolution of the second direction of the detection device comprises:
and determining a mapping row of the target point by using the third axis coordinate of the target point, the view angle of the second direction and the resolution of the second direction.
5. The method of claim 4, wherein the determining the mapping column of the target point using the first axis coordinate, the second axis coordinate, the perspective of the first direction, and the resolution of the first direction of the target point comprises:
subtracting the ratio between the first axis coordinate and the second axis coordinate of the target point from the maximum view angle of the first direction to obtain a first difference value, and determining a first ratio between the first difference value and the resolution of the first direction or an adjacent integer of the first ratio as a mapping column of the target point;
The determining the mapping row of the target point by using the third axis coordinate of the target point, the viewing angle of the second direction and the resolution of the second direction includes:
and subtracting the ratio of the third axis coordinate of the target point to the first direction distance of the target point from the maximum view angle of the second direction to obtain a second difference value, and determining a second ratio between the second difference value and the resolution of the second direction or an adjacent integer of the second ratio as a mapping row of the target point.
6. The method according to claim 3, wherein the first direction is a horizontal direction and the second direction is a vertical direction;
and/or, the obtaining, based on the detection data of the target point corresponding to the two-dimensional point, a data value corresponding to the two-dimensional point in the two-dimensional point data includes:
selecting the target point meeting the preset distance requirement from at least one target point corresponding to the two-dimensional point, and taking the detection data of the selected target point as a data value corresponding to the two-dimensional point.
7. The method of claim 1, wherein the clustering result comprises a number of clusters of the two-dimensional points; the determining, based on the clustering result, a point cloud belonging to the target object from the target point clouds includes:
for each cluster, searching out, from the target point cloud, the target points corresponding to the two-dimensional points in the cluster as a candidate point cloud corresponding to the cluster, wherein the candidate point cloud of each cluster corresponds to one object;
and searching out, from the candidate point clouds corresponding to the clusters, a candidate point cloud whose characteristic factor conforms to the characteristics of the target object as the point cloud of the target object, wherein the characteristic factor is used for representing the characteristics of the object corresponding to the candidate point cloud.
8. The method of claim 7, wherein the characteristic factor comprises at least one of a height of the object corresponding to the candidate point cloud, a height of the object from the ground, and a perpendicularity of the candidate point cloud;
and/or, before the searching out, from the candidate point clouds corresponding to the clusters, a candidate point cloud whose characteristic factor conforms to the characteristics of the target object as the point cloud of the target object, the method further comprises at least one of the following steps:
determining a bounding box of the corresponding object by using the candidate point cloud, and acquiring the height of the bounding box as the height of the object corresponding to the candidate point cloud;
obtaining the distance between the bottom surface of the bounding box and the ground as the height of the object corresponding to the candidate point cloud from the ground;
and obtaining a point cloud covariance matrix of the candidate point cloud, performing eigendecomposition on the point cloud covariance matrix to obtain eigenvectors, and obtaining the perpendicularity of the candidate point cloud based on the eigenvectors.
9. The method according to claim 8, wherein the target point cloud is a point cloud suspected of belonging to the target object, extracted from an original point cloud acquired by the detection device from a detection area, and the original point cloud includes a ground point cloud belonging to the ground;
the obtaining the distance between the bottom surface of the bounding box and the ground comprises the following steps:
determining a ground plane parameter by utilizing a ground point cloud, wherein the ground point cloud is a point cloud belonging to the ground in the original point cloud;
and calculating to obtain the distance between the bottom surface and the ground by using the coordinates of the preset points in the bottom surface and the ground plane parameters.
10. The method of claim 9, further comprising, prior to said determining the ground plane parameters using the ground point cloud:
dividing the ground point cloud in a preset direction to obtain sub-ground point clouds respectively corresponding to a plurality of subareas;
the determining the ground plane parameter by using the ground point cloud comprises the following steps:
for each subarea, determining a ground plane parameter of the subarea based on the sub-ground point cloud in the subarea;
the calculating, by using coordinates of a preset point in the bottom surface and the ground plane parameter, a distance between the bottom surface and the ground includes:
and taking the subarea to which the preset point belongs as a reference subarea, and calculating the distance between the bottom surface and the ground by utilizing the coordinates of the preset point and the ground plane parameters of the reference subarea.
11. The method of claim 1, wherein the target is a cone;
and/or, the acquiring the target point cloud includes:
acquiring an original point cloud acquired by a detection device on a detection area;
and extracting a target point cloud suspected to belong to the target object from the original point cloud.
12. The method of claim 11, wherein the extracting the target point cloud suspected of belonging to the target object from the original point cloud comprises:
carrying out semantic segmentation on the original point cloud by utilizing a semantic segmentation model to obtain point clouds of a plurality of categories, wherein the categories comprise the ground and the target object;
and selecting a point cloud belonging to the category of the target object from the point clouds of the plurality of categories as the target point cloud.
13. An electronic device comprising a memory and a processor coupled to each other,
the memory stores program instructions;
the processor is configured to execute program instructions stored in the memory to implement the method of any one of claims 1-12.
14. A computer readable storage medium storing program instructions executable by a processor to implement the method of any one of claims 1-12.
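Claims 9 and 10 determine ground plane parameters from a ground point cloud, optionally per subarea, and then measure the distance from a preset point on the bounding-box bottom to that plane. Purely as a hedged sketch of one way this can be realized, the Python below uses a least-squares plane fit and splits subareas along the x axis; both choices are assumptions, not requirements of the claims.

```python
import numpy as np

def fit_ground_plane(ground_points):
    """Least-squares fit of z = ax + by + c over ground points (N, 3);
    returns a unit normal n and offset d such that n . p + d = 0."""
    A = np.c_[ground_points[:, :2], np.ones(len(ground_points))]
    (a, b, c), *_ = np.linalg.lstsq(A, ground_points[:, 2], rcond=None)
    n = np.array([a, b, -1.0])
    norm = np.linalg.norm(n)
    return n / norm, c / norm

def point_to_ground_distance(point, normal, d):
    """Unsigned distance from a preset bottom-surface point to the plane."""
    return abs(float(np.dot(normal, point) + d))

def fit_subarea_planes(ground_points, x_edges):
    """Claim-10-style refinement: split the ground point cloud into
    subareas along x (the assumed preset direction) and fit one plane
    per subarea; the subarea containing the preset point is then used."""
    planes = []
    for lo, hi in zip(x_edges[:-1], x_edges[1:]):
        mask = (ground_points[:, 0] >= lo) & (ground_points[:, 0] < hi)
        if mask.sum() >= 3:
            planes.append(((lo, hi), fit_ground_plane(ground_points[mask])))
    return planes
```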
CN202311044445.5A 2023-08-17 2023-08-17 Object detection method, device and storage medium Pending CN117315306A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311044445.5A CN117315306A (en) 2023-08-17 2023-08-17 Object detection method, device and storage medium

Publications (1)

Publication Number Publication Date
CN117315306A (en) 2023-12-29

Family

ID=89283741

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311044445.5A Pending CN117315306A (en) 2023-08-17 2023-08-17 Object detection method, device and storage medium

Country Status (1)

Country Link
CN (1) CN117315306A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination