CN115457506A - Target detection method, device and storage medium


Info

Publication number
CN115457506A
Authority
CN
China
Prior art keywords
point cloud
cloud data
target
radar
bounding box
Prior art date
Legal status
Pending
Application number
CN202211065977.2A
Other languages
Chinese (zh)
Inventor
赵杰 (Zhao Jie)
Current Assignee
China Automotive Innovation Co Ltd
Original Assignee
China Automotive Innovation Co Ltd
Priority date
Filing date
Publication date
Application filed by China Automotive Innovation Co Ltd
Priority to CN202211065977.2A
Publication of CN115457506A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/809 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a target detection method, device and storage medium in the technical field of automatic driving, capable of improving the accuracy of obstacle perception and thereby the safety of automatic driving. The method comprises the following steps: detecting a first object in first point cloud data with a target detection model to obtain a first bounding box and a category representing the first object, together with the first target point cloud data enclosed by the first bounding box; clustering the first point cloud data to obtain a second bounding box and a category representing a second object, together with the second target point cloud data enclosed by the second bounding box; and, if the first object and the second object are determined to be the same target object according to their categories and the intersection-over-union (IoU) of the two bounding boxes, then, when the second bounding box is larger than the first bounding box, determining third target point cloud data for identifying the target object from the first and second target point cloud data, and identifying the target object from it.

Description

Target detection method, device and storage medium
Technical Field
The invention relates to the technical field of automatic driving, and in particular to a target detection method, device and storage medium.
Background
Obstacle perception is a key technology in the field of automatic driving. It senses the obstacles a vehicle encounters while driving and tracks them to obtain the relative position between each obstacle and the vehicle. Based on that relative position and an obstacle-avoidance strategy, the vehicle can then avoid obstacles automatically, enabling automatic driving.
In the related art, obstacles are sensed and tracked directly from the data collected by a laser radar (lidar) mounted on the vehicle. However, obstacles may be missed or falsely detected, so the accuracy of the perceived obstacles is low.
Disclosure of Invention
The invention provides a target detection method, device and storage medium that can improve the accuracy of obstacle perception and thereby the safety of automatic driving.
To this end, the invention adopts the following technical solutions:
In a first aspect, the invention provides a target detection method comprising:
detecting a first object included in first point cloud data with a target detection model, to obtain a first bounding box representing the position of the first object, the category of the first object, and the first target point cloud data enclosed by the first bounding box, where the first point cloud data is obtained from the raw point cloud data collected by the radars on a vehicle in the current acquisition period;
clustering the first point cloud data to obtain a second bounding box representing the position of a second object, the category of the second object, and the second target point cloud data enclosed by the second bounding box;
determining whether the first object and the second object are the same target object according to the categories of the first object and the second object and the intersection-over-union (IoU) of the first bounding box and the second bounding box;
if the first object and the second object are determined to be the same target object, determining third target point cloud data for identifying the target object from the first target point cloud data and the second target point cloud data, based on the size relationship between the first bounding box and the second bounding box; and
identifying the target object from the third target point cloud data.
With the target detection method provided by the invention, the first point cloud data, obtained from the raw point cloud data collected by the radars on the vehicle in the current acquisition period, is processed in two ways, with a target detection model and by clustering, yielding two results: a first bounding box, the category of the first object, and the first target point cloud data inside the first bounding box; and a second bounding box, the category of the second object, and the second target point cloud data inside the second bounding box. The two results are then associated and fused: if the first object and the second object are determined to be the same target object according to the IoU of the two bounding boxes and the two categories, third target point cloud data corresponding to the target object is determined from the first and second target point cloud data, and the target object is obtained from it. Compared with prior-art methods that identify a target object in only one way, this association and fusion of results obtained in two different ways avoids false detections and yields a more accurate target object, especially when two objects have similar features. Accurate identification of target objects during automatic driving in turn improves its safety.
In a possible implementation, clustering the first point cloud data to obtain the second bounding box representing the position of the second object, the category of the second object, and the second target point cloud data enclosed by the second bounding box includes:
determining the first feature points in the first point cloud data that belong to a target plane, the target plane being the plane of the ground as determined from target road information;
deleting the first feature points from the first point cloud data to obtain a plurality of second feature points;
clustering the plurality of second feature points to obtain the cluster corresponding to the second object and the category label of the second object; and
determining, from the cluster, the second bounding box representing the position of the second object and the second target point cloud data enclosed by it, and determining the category of the second object from the category label.
In a possible implementation, determining the first feature points in the first point cloud data that belong to the target plane includes:
acquiring the target road information within a preset range of the vehicle;
determining a plane equation of the target plane from the coordinates of the road feature points in the target road information;
dividing the feature points of the first point cloud data into a plurality of grids;
determining the distance between each grid and the target plane from the feature points in the grid and the plane equation; and
if the distance is less than a first threshold, determining every feature point in that grid to be a first feature point.
In a possible implementation, determining the distance between each grid and the target plane from the feature points in the grid and the plane equation includes:
determining the barycentric coordinates of each grid from the coordinates of the feature points in it; and
determining the distance between each grid and the target plane from its barycentric coordinates and the plane equation.
In a possible implementation, determining whether the first object and the second object are the same target object according to their categories and the IoU of the first and second bounding boxes includes:
if the categories of the first object and the second object are the same and the IoU of the first bounding box and the second bounding box is greater than a second threshold, determining that the first object and the second object are the same target object; and
if the categories of the first object and the second object differ, or the IoU of the first bounding box and the second bounding box is less than the second threshold, determining that the first object and the second object are different target objects.
In a possible implementation, determining the third target point cloud data for identifying the target object from the first and second target point cloud data, based on the size relationship between the first bounding box and the second bounding box, includes:
determining the size relationship between the first bounding box and the second bounding box;
if the second bounding box is larger than the first bounding box, shrinking the second bounding box to the size of the first bounding box to obtain a third bounding box;
deleting from the second target point cloud data the points outside the third bounding box, to obtain fourth target point cloud data; and
merging the first target point cloud data and the fourth target point cloud data to obtain the third target point cloud data.
In a possible implementation, after determining the size relationship between the first bounding box and the second bounding box, the method further includes:
if the second bounding box is smaller than the first bounding box, determining as noise points the feature points of the second target point cloud data that do not belong to the first target point cloud data; and
deleting the noise points from the second target point cloud data to obtain the third target point cloud data.
In a possible implementation, before detecting the first object included in the first point cloud data with the target detection model, the method further includes:
acquiring the raw point cloud data of a main radar mounted on the vehicle roof and the raw point cloud data of a plurality of secondary radars mounted on the vehicle body;
performing motion compensation on the raw point cloud data of the main radar and of each secondary radar;
stitching the motion-compensated raw point cloud data of the main radar and of each secondary radar to obtain stitched point cloud data; and
deleting from the stitched point cloud data the irrelevant feature points outside a preset range of the vehicle, to obtain the first point cloud data.
In a possible implementation, before performing motion compensation on the raw point cloud data of the main radar and of the secondary radars, the target detection method further includes:
determining, for each target radar, a first transformation matrix of the vehicle from the vehicle coordinate system to the map coordinate system, the target radar being each of the main radar and the plurality of secondary radars.
In a possible implementation, the motion compensation of the raw point cloud data of the main radar includes:
acquiring a second transformation matrix of the main radar from the map coordinate system to the main radar coordinate system, and a third transformation matrix of the main radar from the main radar coordinate system to the vehicle coordinate system; and
performing motion compensation on the raw point cloud data of the main radar based on the first, second and third transformation matrices;
and the motion compensation of the raw point cloud data of a secondary radar includes:
acquiring a fourth transformation matrix of the secondary radar from the map coordinate system to the corresponding secondary radar coordinate system, a fifth transformation matrix from the secondary radar coordinate system to the vehicle coordinate system, and a sixth transformation matrix from the secondary radar coordinate system to the main radar coordinate system; and
performing motion compensation on the raw point cloud data of the secondary radar based on the first, fourth, fifth and sixth transformation matrices.
In a possible implementation, the preset range is a three-dimensional spatial range, and deleting the irrelevant feature points outside the preset range of the vehicle from the stitched point cloud data to obtain the first point cloud data includes:
determining as irrelevant feature points both the feature points of the stitched point cloud data that lie outside a preset planar range of the vehicle and those whose vertical-axis coordinates lie outside a preset coordinate range; and
deleting the irrelevant feature points from the stitched point cloud data to obtain the first point cloud data.
In a possible implementation, after determining whether the first object and the second object are the same target object, the target detection method further includes:
if the first object and the second object are determined not to be the same target object, acquiring second, third and fourth point cloud data, obtained respectively from the raw point cloud data collected by the radars in the three acquisition periods following the current one;
clustering the second, third and fourth point cloud data respectively, to obtain a fourth bounding box representing the position of a fourth object and the category of the fourth object, a fifth bounding box representing the position of a fifth object and the category of the fifth object, and a sixth bounding box representing the position of a sixth object and the category of the sixth object; and
determining whether the second object exists according to the categories of the second, fourth, fifth and sixth objects and the pairwise IoU of the second, fourth, fifth and sixth bounding boxes.
In a second aspect, the invention provides a target detection apparatus comprising:
a first determining unit, configured to detect a first object included in first point cloud data with a target detection model, to obtain a first bounding box representing the position of the first object, the category of the first object, and the first target point cloud data enclosed by the first bounding box, where the first point cloud data is obtained from the raw point cloud data collected by the radars on a vehicle in the current acquisition period;
a second determining unit, configured to cluster the first point cloud data to obtain a second bounding box representing the position of a second object, the category of the second object, and the second target point cloud data enclosed by the second bounding box; and
a third determining unit, configured to determine whether the first object and the second object are the same target object according to the categories of the first object and the second object and the IoU of the first bounding box and the second bounding box;
the third determining unit being further configured to determine, if the first object and the second object are determined to be the same target object, third target point cloud data for identifying the target object from the first and second target point cloud data, based on the size relationship between the first bounding box and the second bounding box;
and being further configured to identify the target object from the third target point cloud data.
In a possible implementation, the second determining unit is specifically configured to:
determine the first feature points in the first point cloud data that belong to a target plane, the target plane being the plane of the ground as determined from target road information;
delete the first feature points from the first point cloud data to obtain a plurality of second feature points;
cluster the plurality of second feature points to obtain the cluster corresponding to the second object and the category label of the second object; and
determine, from the cluster, the second bounding box representing the position of the second object and the second target point cloud data enclosed by it, and determine the category of the second object from the category label.
In a possible implementation, the second determining unit is specifically configured to:
acquire the target road information within a preset range of the vehicle;
determine a plane equation of the target plane from the coordinates of the road feature points in the target road information;
divide the feature points of the first point cloud data into a plurality of grids;
determine the distance between each grid and the target plane from the feature points in the grid and the plane equation; and
if the distance is less than the first threshold, determine every feature point in that grid to be a first feature point.
In a possible implementation, the second determining unit is specifically configured to:
determine the barycentric coordinates of each grid from the coordinates of the feature points in it; and
determine the distance between each grid and the target plane from its barycentric coordinates and the plane equation.
In a possible implementation, the third determining unit is specifically configured to:
if the categories of the first object and the second object are the same and the IoU of the first bounding box and the second bounding box is greater than a second threshold, determine that the first object and the second object are the same target object; and
if the categories of the first object and the second object differ, or the IoU of the first bounding box and the second bounding box is less than the second threshold, determine that the first object and the second object are different target objects.
In a possible implementation, the third determining unit is specifically configured to:
determine the size relationship between the first bounding box and the second bounding box;
if the second bounding box is larger than the first bounding box, shrink the second bounding box to the size of the first bounding box to obtain a third bounding box;
delete from the second target point cloud data the points outside the third bounding box, to obtain fourth target point cloud data; and
merge the first target point cloud data and the fourth target point cloud data to obtain the third target point cloud data.
In a possible implementation, the third determining unit is further configured to:
if the second bounding box is smaller than the first bounding box, determine as noise points the feature points of the second target point cloud data that do not belong to the first target point cloud data; and
delete the noise points from the second target point cloud data to obtain the third target point cloud data.
In a possible implementation, the target detection apparatus further includes:
an acquisition unit, configured to acquire the raw point cloud data of a main radar mounted on the vehicle roof and the raw point cloud data of a plurality of secondary radars mounted on the vehicle body; and
a fourth determining unit, configured to perform motion compensation on the raw point cloud data of the main radar and of each secondary radar; stitch the motion-compensated raw point cloud data of the main radar and of each secondary radar to obtain stitched point cloud data; and delete from the stitched point cloud data the irrelevant feature points outside the preset range of the vehicle, to obtain the first point cloud data.
In a possible implementation, the fourth determining unit is further configured to determine, for each target radar, a first transformation matrix of the vehicle from the vehicle coordinate system to the map coordinate system, the target radar being each of the main radar and the plurality of secondary radars.
In a possible implementation, the fourth determining unit is specifically configured to:
acquire a second transformation matrix of the main radar from the map coordinate system to the main radar coordinate system, and a third transformation matrix of the main radar from the main radar coordinate system to the vehicle coordinate system, and perform motion compensation on the raw point cloud data of the main radar based on the first, second and third transformation matrices; and
acquire a fourth transformation matrix of a secondary radar from the map coordinate system to the corresponding secondary radar coordinate system, a fifth transformation matrix from the secondary radar coordinate system to the vehicle coordinate system, and a sixth transformation matrix from the secondary radar coordinate system to the main radar coordinate system, and perform motion compensation on the raw point cloud data of the secondary radar based on the first, fourth, fifth and sixth transformation matrices.
In a possible implementation, the preset range is a three-dimensional spatial range, and the fourth determining unit is specifically configured to:
determine as irrelevant feature points both the feature points of the stitched point cloud data that lie outside a preset planar range of the vehicle and those whose vertical-axis coordinates lie outside a preset coordinate range, and delete the irrelevant feature points from the stitched point cloud data to obtain the first point cloud data.
In a possible implementation, the third determining unit is further configured to: if the first object and the second object are determined not to be the same target object, acquire second, third and fourth point cloud data, obtained respectively from the raw point cloud data collected by the radars in the three acquisition periods following the current one;
cluster the second, third and fourth point cloud data respectively, to obtain a fourth bounding box representing the position of a fourth object and the category of the fourth object, a fifth bounding box representing the position of a fifth object and the category of the fifth object, and a sixth bounding box representing the position of a sixth object and the category of the sixth object; and
determine whether the second object exists according to the categories of the second, fourth, fifth and sixth objects and the pairwise IoU of the second, fourth, fifth and sixth bounding boxes.
In a third aspect, the invention provides a target detection apparatus comprising a processor and a memory. The memory stores computer program code including computer instructions. When the processor executes the computer instructions, the target detection apparatus performs the target detection method of the first aspect and any of its possible implementations.
In a fourth aspect, the invention provides a computer-readable storage medium storing computer instructions which, when run on a target detection apparatus, cause the apparatus to perform the target detection method of the first aspect or any of its possible implementations.
Drawings
Fig. 1 is a first schematic structural diagram of a target detection apparatus according to an embodiment of the present invention;
Fig. 2 is a first schematic flowchart of a target detection method according to an embodiment of the present invention;
Fig. 3 is a second schematic flowchart of a target detection method according to an embodiment of the present invention;
Fig. 4 is a third schematic flowchart of a target detection method according to an embodiment of the present invention;
Fig. 5 is a fourth schematic flowchart of a target detection method according to an embodiment of the present invention;
Fig. 6 is a second schematic structural diagram of a target detection apparatus according to an embodiment of the present invention;
Fig. 7 is a third schematic structural diagram of a target detection apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the drawings. The described embodiments are obviously only some, not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of the invention.
In the following, the terms "first" and "second" are used for description only and must not be understood as indicating or implying relative importance or the number of the technical features referred to; a feature qualified by "first" or "second" may explicitly or implicitly include one or more such features. In the description of the embodiments, "a plurality of" means two or more unless stated otherwise. In addition, "based on" and "according to" are open and inclusive: a process, step, calculation or other action that is "based on" or "according to" one or more stated conditions or values may in practice also depend on additional conditions or on values beyond those stated.
To improve the accuracy of obstacle perception and the safety of automatic driving, embodiments of the invention provide a target detection method, apparatus and storage medium. The first point cloud data, obtained from the raw point cloud data collected by the radars on the vehicle in the current acquisition period, is processed both with a target detection model and by clustering, yielding two results. The two results are then associated and fused to obtain the target object corresponding to the first point cloud data. Compared with prior-art methods that identify a target object in only one way, this avoids false detections and yields a more accurate target object. Accurate identification of target objects during automatic driving in turn improves its safety.
The target detection method provided by the embodiments of the invention is applicable to a vehicle with one main radar mounted on the roof, a plurality of secondary radars mounted around the body, and a vehicle-mounted terminal and a positioning module. The main radar, each secondary radar and the positioning module each communicate with the vehicle-mounted terminal by wired or wireless communication.
The main radar and the secondary radars serve as the environment sensing devices of the automatic driving system: in each acquisition period they collect raw point cloud data and send it to the vehicle-mounted terminal.
The positioning module provides the vehicle-mounted terminal with the target road information within the preset range of the vehicle and with the pose information of the vehicle.
Fig. 1 is a schematic structural diagram of a target detection apparatus. As shown in Fig. 1, the apparatus may include: a processor 11, a memory 12, a communication interface 13, and a bus 14. The processor 11, the memory 12 and the communication interface 13 may be connected by the bus 14.
The processor 11 is the control center of the target detection apparatus and may be a single processor or a collective term for a plurality of processing elements. For example, the processor 11 may be a general-purpose central processing unit (CPU) or another general-purpose processor, such as a microprocessor or any conventional processor.
As one embodiment, the processor 11 may include one or more CPUs, such as CPU0 and CPU1 shown in Fig. 1.
The memory 12 may be, but is not limited to, a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a magnetic disk storage medium or another magnetic storage device, or any other medium that can carry or store desired program code in the form of instructions or data structures and can be accessed by a computer.
In a possible implementation, the memory 12 may exist separately from the processor 11 and be connected to it via the bus 14 to store instructions or program code. When calling and executing the instructions or program code stored in the memory 12, the processor 11 can implement the target detection method provided by the following embodiments of the invention.
In another possible implementation, the memory 12 may instead be integrated with the processor 11.
The communication interface 13 connects the target detection apparatus with other devices through a communication network, which may be an Ethernet, a radio access network (RAN), a wireless local area network (WLAN), or the like. The communication interface 13 may include a receiving unit for receiving data and a transmitting unit for transmitting data.
The bus 14 may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in Fig. 1, but this does not mean there is only one bus or one type of bus.
It should be noted that the configuration shown in Fig. 1 does not limit the target detection apparatus: it may include more or fewer components than shown, combine some components, or arrange the components differently.
The target detection method provided by the embodiment of the invention is executed by a target detection apparatus. The apparatus may be a vehicle, a vehicle-mounted terminal in the vehicle, or a control module in the vehicle for target detection. The embodiment of the invention is described below taking a vehicle executing the target detection method as an example.
The following describes a target detection method provided by an embodiment of the present invention with reference to the drawings.
As shown in Fig. 2, the target detection method provided by the embodiment of the present invention includes the following steps 201 to 205.
201. Detect a first object included in the first point cloud data with a target detection model, obtaining a first bounding box representing the position of the first object, the category of the first object, and the first target point cloud data enclosed by the first bounding box.
The first point cloud data is obtained from the raw point cloud data collected by the radars on the vehicle in the current acquisition period. The first object may be an obstacle around the vehicle, such as a pedestrian, a roadblock, or another vehicle. The first bounding box may be a three-dimensional bounding box.
The target detection model adopted in the embodiment of the present invention is, for example, the PointPillars model, but is not limited to it.
202. Cluster the first point cloud data, obtaining a second bounding box representing the position of a second object, the category of the second object, and the second target point cloud data enclosed by the second bounding box.
The second object may likewise be a pedestrian, a roadblock, or another obstacle near the vehicle, and the second bounding box is also a three-dimensional bounding box.
203. Determine whether the first object and the second object are the same target object according to the categories of the first object and the second object and the IoU of the first bounding box and the second bounding box.
Jointly judging the result of processing the first point cloud data with the detection model and the result of processing it by clustering determines whether the first object and the second object are the same target object; this avoids false detections and ensures the accuracy of the detection result.
204. If the first object and the second object are determined to be the same target object, determine third target point cloud data for identifying the target object from the first and second target point cloud data, based on the size relationship between the first bounding box and the second bounding box.
In the embodiment of the present invention, the second bounding box and the second target point cloud data are obtained by clustering the first point cloud data, and clustering may over-segment or under-segment the data, making the second bounding box and the second target point cloud data less accurate; by contrast, the result obtained with the target detection model is more accurate. Therefore, if the first object and the second object are the same target object but the second bounding box obtained by clustering differs in size from the first bounding box obtained with the detection model, the third target point cloud data must be re-determined from the first target point cloud data inside the first bounding box and the second target point cloud data inside the second bounding box. This prevents false detections and makes the target object more accurate.
Optionally, determining the third target point cloud data from the first and second target point cloud data based on the size relationship of the boxes may proceed as follows. First determine the size relationship between the first bounding box and the second bounding box. If the second bounding box is larger than the first bounding box, shrink the second bounding box to the size of the first to obtain a third bounding box, delete from the second target point cloud data the points outside the third bounding box to obtain fourth target point cloud data, and merge the first and fourth target point cloud data into the third target point cloud data.
Optionally, if the second bounding box is smaller than the first bounding box, determine as noise points, based on the first target point cloud data inside the first bounding box, the feature points of the second target point cloud data that do not belong to the first target point cloud data, and then delete the noise points from the second target point cloud data to obtain the third target point cloud data. A sketch of both branches follows.
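As a minimal sketch of the two fusion branches just described (not the patent's own implementation: the axis-aligned box representation, the volume comparison, and all names are assumptions), the logic could look like this in Python:

```python
import numpy as np

def fuse_target_points(box_det, pts_det, box_clu, pts_clu):
    """Fuse the detector and clustering results for one matched target object.

    Boxes are assumed axis-aligned numpy arrays (xmin, ymin, zmin, xmax, ymax, zmax);
    pts_* are (N, 3) arrays of the points inside the respective boxes.
    """
    size_det = np.prod(box_det[3:] - box_det[:3])
    size_clu = np.prod(box_clu[3:] - box_clu[:3])
    if size_clu > size_det:
        # Second box larger: shrink it to the first box (the third bounding box),
        # keep only clustered points inside it, then merge with the detector points.
        inside = np.all((pts_clu >= box_det[:3]) & (pts_clu <= box_det[3:]), axis=1)
        return np.unique(np.vstack([pts_det, pts_clu[inside]]), axis=0)
    # Second box smaller: clustered points that are not among the detector's
    # points are treated as noise and dropped.
    det_set = {tuple(p) for p in np.round(pts_det, 6)}
    keep = np.array([tuple(p) in det_set for p in np.round(pts_clu, 6)], dtype=bool)
    return pts_clu[keep]
```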
205. Identify the target object from the third target point cloud data.
Once the third target point cloud data is determined, the target object can be identified from it by conventional means; this process is not described in detail in the embodiment of the present invention.
With the target detection method provided by the embodiment of the invention, the first point cloud data, obtained from the raw point cloud data collected by the radars on the vehicle in the current acquisition period, is processed in two ways, with a target detection model and by clustering, yielding two results: a first bounding box, the category of the first object, and the first target point cloud data inside the first bounding box; and a second bounding box, the category of the second object, and the second target point cloud data inside the second bounding box. The two results are then associated and fused: if the first object and the second object are determined to be the same target object according to the IoU of the two bounding boxes and the two categories, third target point cloud data corresponding to the target object is determined from the first and second target point cloud data, and the target object is obtained from it. Compared with prior-art methods that identify a target object in only one way, this avoids false detections and yields a more accurate target object, especially when two objects have similar features. Accurate identification of target objects during automatic driving in turn improves its safety.
With reference to Fig. 2, and as shown in Fig. 3, step 202 may include the following steps 301 to 304.
301. Determine the first feature points in the first point cloud data that belong to a target plane, the target plane being the plane of the ground as determined from the target road information.
Optionally, determining the first feature points may include the following steps. First, acquire the target road information within the preset range of the vehicle. Second, determine a plane equation of the target plane from the coordinates of the road feature points in the target road information. Third, divide the feature points of the first point cloud data into a plurality of grids. Then determine the distance between each grid and the target plane from the feature points in the grid and the plane equation. Finally, if the distance is less than the first threshold, determine every feature point in that grid to be a first feature point.
For example, the target road information may include the lane line information and road edge information on the ground. The plane equation of the target plane is obtained by fitting the coordinates of the road feature points in the target road information.
For example, determining the distance between each grid and the target plane may include: first determining the barycentric coordinates of each grid from the coordinates of the feature points in it, then determining the distance between each grid and the target plane from its barycentric coordinates and the plane equation.
Using the distance between each grid's barycenter and the target plane to decide whether the feature points in the grid lie on the target plane reduces the amount of computation and improves efficiency, while still deleting as many points on the target plane as possible, so that the retained second feature points are all feature points off the target plane. A sketch follows.
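The grid-based ground filtering of steps 301 and 302 can be sketched as follows (a simplification under assumptions: the ground plane ax + by + cz + d = 0 is taken as already fitted from the road feature points, and the grid size and first threshold are illustrative values):

```python
import numpy as np

def remove_ground(points, plane, grid=0.5, dist_thresh=0.2):
    """Drop the points of every grid whose barycenter lies near the ground plane.

    points: (N, 3) first point cloud data; plane: (a, b, c, d) with
    a*x + b*y + c*z + d = 0. Returns the second feature points.
    """
    a, b, c, d = plane
    norm = np.linalg.norm([a, b, c])
    cells = np.floor(points[:, :2] / grid).astype(np.int64)  # 2D grid in x-y
    keep = np.ones(len(points), dtype=bool)
    for cell in np.unique(cells, axis=0):
        mask = np.all(cells == cell, axis=1)
        centroid = points[mask].mean(axis=0)               # grid barycenter
        dist = abs(np.dot([a, b, c], centroid) + d) / norm
        if dist < dist_thresh:                             # grid lies on the plane
            keep[mask] = False                             # first feature points
    return points[keep]
```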
302. Delete the first feature points from the first point cloud data to obtain a plurality of second feature points.
Some feature points in the first point cloud data lie on the ground, and the point cloud they form is mostly texture-like; clustering the first point cloud data directly would therefore degrade the accuracy of the clustering result. To improve that accuracy, the feature points on the target plane of the ground must be deleted from the first point cloud data before clustering.
303. Cluster the plurality of second feature points to obtain the cluster corresponding to the second object and the category label of the second object.
When the first feature points were determined, the first point cloud data was divided into a plurality of grids, and the grids whose distance to the target plane is less than the first threshold were deleted. The remaining second feature points therefore sit in different grids. Adjacent grids are now identified from the distances between grids; the second feature points in adjacent grids are merged and outliers are removed, yielding a cluster and its category label, i.e., the cluster corresponding to the second object and the category label of the second object. A flood-fill sketch of this merging follows.
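The merging of second feature points in adjacent grids can be sketched as a flood fill over the occupied grid cells (an assumed simplification; the patent does not fix the adjacency criterion, the labeling, or the outlier-removal rule):

```python
from collections import deque

def cluster_grids(occupied):
    """Group occupied grid cells into clusters by 8-neighborhood adjacency.

    occupied: set of (ix, iy) cells containing second feature points.
    Returns a list of clusters, each a set of cells.
    """
    unvisited, clusters = set(occupied), []
    while unvisited:
        seed = unvisited.pop()
        cluster, queue = {seed}, deque([seed])
        while queue:
            ix, iy = queue.popleft()
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nb = (ix + dx, iy + dy)
                    if nb in unvisited:   # adjacent occupied grid
                        unvisited.remove(nb)
                        cluster.add(nb)
                        queue.append(nb)
        clusters.append(cluster)
    return clusters
```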
304. Determine, from the cluster, the second bounding box representing the position of the second object and the second target point cloud data enclosed by it, and determine the category of the second object from the category label.
With reference to Fig. 3, and as shown in Fig. 4, step 203 may include the following step 401 or step 402.
401. If the categories of the first object and the second object are the same and the IoU of the first bounding box and the second bounding box is greater than the second threshold, determine that the first object and the second object are the same target object.
If the categories of the first object and the second object differ, the two cannot be the same target object. If the categories are the same, whether they are the same target object is further decided by the IoU of the first and second bounding boxes: the larger the IoU, the greater the overlap of the two boxes, and hence the higher the probability that the first object and the second object are the same target object.
402. If the categories of the first object and the second object differ, or the IoU of the first bounding box and the second bounding box is less than the second threshold, determine that the first object and the second object are different target objects.
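A sketch of the association test of steps 401 and 402, using bird's-eye-view axis-aligned boxes for brevity (the patent's boxes are three-dimensional; the box format and the value of the second threshold are assumptions):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (xmin, ymin, xmax, ymax)."""
    ix = max(0.0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    iy = max(0.0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    inter = ix * iy
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def same_target(cat1, box1, cat2, box2, second_threshold=0.5):
    # Same category AND IoU above the second threshold -> same target object.
    return cat1 == cat2 and iou(box1, box2) > second_threshold
```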
With reference to Fig. 4, and as shown in Fig. 5, the target detection method provided by the embodiment of the present invention may further include the following steps 501 to 504.
501. Acquire the raw point cloud data of the main radar mounted on the vehicle roof and the raw point cloud data of the plurality of secondary radars mounted on the vehicle body.
A vehicle with only one main radar has many blind spots. In the embodiment of the invention, therefore, besides the main radar on the roof, several blind-spot-filling radars, i.e., the secondary radars, are mounted around the vehicle body. While the vehicle is driving, each radar collects raw point cloud data in every acquisition period.
For example, the embodiment of the invention may use 4 secondary radars, mounted on the front, rear, left and right sides of the vehicle body.
Optionally, because the vehicle moves during an acquisition period, the raw point cloud data is distorted. To remove the distortion, motion compensation must be applied to the raw point cloud data of the main radar and of the secondary radars. Before that, a first transformation matrix of the vehicle from the vehicle coordinate system to the map coordinate system must be determined for each target radar, the target radar being each of the main radar and the plurality of secondary radars.
How the first transformation matrix is determined is illustrated with the main radar as the target radar. First, according to the start time of the main radar's raw point cloud data, acquire the first pose information of the vehicle at the start time from the vehicle's positioning module; according to the end time of the raw point cloud data, acquire the second pose information at the end time. Then, for each target time, determine the target pose information of the vehicle at that time from the start time, the end time, the first pose information and the second pose information, and obtain the first transformation matrix from the target pose information.
For example, the first transformation matrix may be determined according to the following formulas (1) and (2):

$$pose_{time} = pose_{begin\_time} + \frac{time - begin\_time}{end\_time - begin\_time}\left(pose_{end\_time} - pose_{begin\_time}\right) \tag{1}$$

$$T_{car}^{map} = \mathcal{M}\left(pose_{time}\right) \tag{2}$$

where $pose_{time}$ is the target pose information; $time$ is the target time; $begin\_time$ and $end\_time$ are the start and end times; $pose_{begin\_time}$ and $pose_{end\_time}$ are the first and second pose information; $T_{car}^{map}$ is the first transformation matrix; and $\mathcal{M}(\cdot)$ denotes building the homogeneous transformation matrix from a pose.
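A sketch of this per-point pose interpolation per formulas (1) and (2) (the pose is reduced to translation plus yaw for brevity; a real implementation would interpolate the rotation properly, e.g. with quaternion slerp):

```python
import numpy as np

def interpolate_pose(t, t0, t1, pose0, pose1):
    """Linearly interpolate the vehicle pose (arrays of x, y, z, yaw) at time t."""
    s = (t - t0) / (t1 - t0)
    return pose0 + s * (pose1 - pose0)

def pose_to_matrix(pose):
    """Build the first transformation matrix (vehicle -> map) from a pose."""
    x, y, z, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]  # yaw rotation
    T[:3, 3] = [x, y, z]                                      # translation
    return T
```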
502. Perform motion compensation on the raw point cloud data of the main radar and of each secondary radar.
Optionally, the motion compensation of the main radar's raw point cloud data may include: first acquiring the second transformation matrix of the main radar from the map coordinate system to the main radar coordinate system and the third transformation matrix from the main radar coordinate system to the vehicle coordinate system, then performing motion compensation on the main radar's raw point cloud data based on the first, second and third transformation matrices.
For example, the raw point cloud data of the main radar may be motion compensated according to the following formula (3):

$$X_{main1} = T_{map}^{lidar1}\, T_{car}^{map}\, T_{lidar1}^{car}\, X_{lidar1} \tag{3}$$

where $X_{main1}$ is the point cloud obtained by motion-compensating the main radar's raw point cloud data; $T_{map}^{lidar1}$ is the second transformation matrix; $T_{lidar1}^{car}$ is the third transformation matrix; and $X_{lidar1}$ is the raw point cloud data of the main radar.
Optionally, the motion compensation of a secondary radar's raw point cloud data may include: first acquiring the fourth transformation matrix of the secondary radar from the map coordinate system to the corresponding secondary radar coordinate system, the fifth transformation matrix from the secondary radar coordinate system to the vehicle coordinate system, and the sixth transformation matrix from the secondary radar coordinate system to the main radar coordinate system, then performing motion compensation on the secondary radar's raw point cloud data based on the first, fourth, fifth and sixth transformation matrices.
For example, the raw point cloud data of a secondary radar may be motion compensated according to the following formula (4):

$$X_{main2} = T_{lidar2}^{main}\, T_{map}^{lidar2}\, T_{car}^{map}\, T_{lidar2}^{car}\, X_{lidar2} \tag{4}$$

where $X_{main2}$ is the point cloud obtained by motion-compensating the secondary radar's raw point cloud data; $T_{map}^{lidar2}$ is the fourth transformation matrix; $T_{lidar2}^{car}$ is the fifth transformation matrix; $T_{lidar2}^{main}$ is the sixth transformation matrix; and $X_{lidar2}$ is the raw point cloud data of the secondary radar.
It should be noted that the second through sixth transformation matrices can all be obtained directly for the corresponding radar. A code sketch of formulas (3) and (4) follows.
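Putting formulas (3) and (4) into code, motion compensation is a chain of 4x4 homogeneous transforms applied to each point (a sketch; strictly, the vehicle-to-map matrix varies per point with that point's timestamp):

```python
import numpy as np

def compensate(points, *matrices):
    """Apply a chain of 4x4 homogeneous transforms to an (N, 3) point cloud.

    Matrices are passed left to right as they appear in the formula, e.g. for
    the main radar: compensate(X_lidar1, T_map_lidar1, T_car_map, T_lidar1_car).
    """
    T = np.linalg.multi_dot(matrices) if len(matrices) > 1 else matrices[0]
    homo = np.hstack([points, np.ones((len(points), 1))])  # (N, 4) homogeneous
    return (homo @ T.T)[:, :3]
```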
503. Stitch the motion-compensated raw point cloud data of the main radar and of each secondary radar to obtain stitched point cloud data.
For example, with 4 secondary radars mounted on the vehicle, the motion-compensated point clouds of all radars can be added according to the following formula (5) to obtain the stitched point cloud data:
$$X_{main} = X_{main1} + X_{main2} + X_{main3} + X_{main4} + X_{main5} \tag{5}$$

where $X_{main}$ is the stitched point cloud data and $X_{main3}$, $X_{main4}$, $X_{main5}$ are the motion-compensated point clouds of the other three of the 4 secondary radars.
504. And deleting the non-relevant feature points out of the preset range of the vehicle in the spliced point cloud data to obtain first point cloud data.
In order to further improve the accuracy of the target detection result, it is also necessary to delete the non-relevant feature points in the stitched point cloud data.
Optionally, the preset range is a three-dimensional space range. Therefore, the non-relevant feature points may include feature points in the stitched point cloud data that belong outside a preset plane range of the vehicle and feature points in the stitched point cloud data that belong outside a preset coordinate range in vertical axis coordinates. And deleting the non-relevant characteristic points from the spliced point cloud data to obtain first point cloud data.
The preset plane range may be a plane range formed by taking the vehicle as the center and keeping a certain distance from the front, rear, left and right of the vehicle. The non-relevant feature points outside the preset plane range of the vehicle are deleted and only the feature points within the preset plane range are kept, so that only objects within the preset plane range are detected. In addition, the radar may produce specular reflections, which causes noise points that appear to lie in the sky or below the ground. Therefore, the point cloud data belonging to the sky and to the region below the ground also needs to be deleted; such points can be identified from the vertical-axis coordinates of the feature points in the spliced point cloud data. After the non-relevant feature points are deleted from the spliced point cloud data, the resulting first point cloud data best reflects the features and sizes of the objects, which helps guarantee the accuracy of the detection result.
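Steps 503 and 504 can be pictured together as a concatenation followed by a box filter. The sketch below assumes the compensated clouds are (N_i, 3) arrays already in the main radar frame; the extents used are illustrative placeholders, not values from the patent:

```python
import numpy as np

def stitch_and_crop(clouds, xy_half_extent=50.0, z_min=-2.5, z_max=4.0):
    """Concatenate the compensated clouds (formula (5)) and keep only the
    feature points inside a vehicle-centred plane range and a vertical-axis
    coordinate range, per step 504."""
    merged = np.vstack(clouds)                  # X_main = X_main1 + ... + X_main5
    in_plane = (np.abs(merged[:, 0]) <= xy_half_extent) & \
               (np.abs(merged[:, 1]) <= xy_half_extent)            # front/rear/left/right limits
    in_height = (merged[:, 2] >= z_min) & (merged[:, 2] <= z_max)  # drop sky / below-ground noise
    return merged[in_plane & in_height]         # first point cloud data
```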
For example, determining whether a feature point belongs to the preset plane range of the vehicle may include the following steps. First, the vehicle may load a high-precision map and acquire target road information (including lane line information and road edge information) within the preset plane range of the vehicle from the high-precision map. Then, each feature point in the spliced point cloud data is compared with the road feature points in the target road information to judge whether it lies outside the preset plane range. If a feature point in the spliced point cloud data lies outside the preset plane range, the feature point is determined to be a non-relevant feature point.
Optionally, in the target detection method provided in the embodiment of the present invention, after step 203, if it is determined that the first object and the second object are not the same target object, there may be a missed detection, and it is therefore necessary to confirm whether the second object exists.
Illustratively, confirming whether the second object exists may include the following steps. First, second point cloud data, third point cloud data and fourth point cloud data are obtained, each based on the raw point cloud data acquired by the radar in one of the three consecutive acquisition periods after the current acquisition period. Then, clustering processing is performed on the second, third and fourth point cloud data respectively, so as to obtain a fourth bounding box representing the position of a fourth object and the category of the fourth object, a fifth bounding box representing the position of a fifth object and the category of the fifth object, and a sixth bounding box representing the position of a sixth object and the category of the sixth object. Finally, whether the second object exists is determined according to the categories of the second, fourth, fifth and sixth objects and the intersection ratio of every two bounding boxes among the second, fourth, fifth and sixth bounding boxes.
If the first object and the second object are not the same target object, the result obtained by the target detection model may contain a missed detection. In this case, the first object obtained by the target detection model is retained and can be directly subjected to target tracking. For the second object obtained through clustering, whether it actually exists needs to be confirmed again based on the second, third and fourth point cloud data acquired by the radar in the three consecutive acquisition periods after the current acquisition period. Specifically, the second, third and fourth point cloud data are each clustered to obtain three processing results. If the category of each of the three processing results is the same as that of the second object, and the intersection ratio of every two bounding boxes among the bounding boxes of the three processing results and the second bounding box of the second object is larger than the second threshold, the second object exists, that is, there is a missed detection, and target tracking may then also be performed on the second object. If any one of the three processing results differs in category from the second object, or the intersection ratio of any two of these bounding boxes is smaller than the second threshold, the second object does not exist, that is, there is no missed detection.
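The confirmation rule can be sketched as follows, assuming axis-aligned 3D boxes given as (xmin, ymin, zmin, xmax, ymax, zmax); the patent does not specify how the intersection ratio is computed, so an axis-aligned 3D IoU is used here as a stand-in, and the threshold value is a placeholder:

```python
import numpy as np

def iou_3d(a, b):
    """Axis-aligned 3D intersection-over-union of two boxes
    (xmin, ymin, zmin, xmax, ymax, zmax)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    lo, hi = np.maximum(a[:3], b[:3]), np.minimum(a[3:], b[3:])
    inter = np.prod(np.clip(hi - lo, 0.0, None))
    vol = lambda box: np.prod(box[3:] - box[:3])
    return inter / (vol(a) + vol(b) - inter + 1e-9)

def second_object_exists(second, later_results, second_threshold=0.5):
    """second: (category, box) from the current period; later_results: the
    three (category, box) clustering results from the subsequent periods.
    The second object is confirmed only if all four categories match and the
    IoU of every pair among the four boxes exceeds the second threshold."""
    categories = [second[0]] + [c for c, _ in later_results]
    boxes = [second[1]] + [b for _, b in later_results]
    if any(c != second[0] for c in categories):
        return False
    return all(iou_3d(boxes[i], boxes[j]) > second_threshold
               for i in range(len(boxes)) for j in range(i + 1, len(boxes)))
```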
The scheme provided by the embodiment of the present invention has mainly been introduced from the perspective of the device. It can be understood that, in order to carry out the above functions, the device includes corresponding hardware structures and/or software modules for performing the respective functions. Those skilled in the art will readily appreciate that, in combination with the exemplary algorithm steps described in connection with the embodiments disclosed herein, the present invention can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
Fig. 6 shows a second schematic structural diagram of the object detection apparatus 600 according to the above embodiment, and as shown in fig. 6, the object detection apparatus 600 may include: a first determination unit 601, a second determination unit 602, and a third determination unit 603.
The first determining unit 601 is configured to detect a first object included in the first point cloud data by using a target detection model, and obtain a first boundary frame indicating a position where the first object is located, a category of the first object, and first target point cloud data included in the first boundary frame; the first point cloud data is obtained based on original point cloud data acquired by a radar on the vehicle in a current acquisition period. The second determining unit 602 is configured to perform clustering processing on the first point cloud data to obtain a second boundary box indicating a position where the second object is located, a category of the second object, and second target point cloud data included in the second boundary box. A third determining unit 603, configured to determine whether the first object and the second object are the same target object according to the categories of the first object and the second object and the intersection ratio of the first bounding box and the second bounding box. The third determining unit 603 is further configured to determine, if it is determined that the first object and the second object are the same target object, third target point cloud data for identifying the target object from the first target point cloud data and the second target point cloud data based on a size relationship between the first boundary box and the second boundary box. The third determining unit 603 is further configured to identify the target object according to the third target point cloud data.
Optionally, the second determining unit 602 is specifically configured to:
determining a first feature point belonging to a target plane in the first point cloud data, wherein the target plane is the plane in which the ground lies, determined according to the target road information; deleting the first feature points from the first point cloud data to obtain a plurality of second feature points; clustering the plurality of second feature points to obtain a cluster corresponding to the second object and a category label of the second object; and determining, according to the cluster, a second bounding box representing the position of the second object and the second target point cloud data included in the second bounding box, and determining the category of the second object according to the category label.
Optionally, the second determining unit 602 is specifically configured to:
acquiring target road information within a preset range of the vehicle; determining a plane equation of the target plane according to the coordinates of the road feature points in the target road information; dividing each feature point in the first point cloud data into a plurality of grids; determining the distance between each grid and the target plane according to the feature points in each grid and the plane equation; and if the distance is less than a first threshold, determining each feature point in the grid as a first feature point.
Optionally, the second determining unit 602 is specifically configured to:
determining the barycentric coordinate of each grid according to the coordinates of each feature point in each grid; and determining the distance between each grid and the target plane according to each barycentric coordinate and the plane equation.
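The configuration of the second determining unit 602 amounts to grid-based ground removal followed by clustering. A minimal sketch, assuming a 2D grid over the x-y plane, a ground plane a·x + b·y + c·z + d = 0 fitted from the road feature points, and DBSCAN as one possible clustering algorithm (the patent names none); the grid size, threshold and DBSCAN parameters are illustrative:

```python
import numpy as np
from sklearn.cluster import DBSCAN  # one possible clustering choice

def remove_ground_and_cluster(points, plane, grid_size=0.5, first_threshold=0.2):
    """points: (N, 3) first point cloud data; plane: (a, b, c, d).
    Grids whose barycentre lies within the first threshold of the target
    plane are treated as ground (first feature points) and removed."""
    a, b, c, d = plane
    norm = np.linalg.norm([a, b, c])
    cells = np.floor(points[:, :2] / grid_size).astype(int)   # assign points to grids
    keep = np.ones(len(points), dtype=bool)
    for cell in np.unique(cells, axis=0):
        mask = np.all(cells == cell, axis=1)
        gx, gy, gz = points[mask].mean(axis=0)                # barycentric coordinates
        if abs(a * gx + b * gy + c * gz + d) / norm < first_threshold:
            keep[mask] = False                                # delete first feature points
    second_feature_points = points[keep]
    labels = DBSCAN(eps=0.8, min_samples=5).fit_predict(second_feature_points)
    return second_feature_points, labels  # each cluster label -> one candidate second object
```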
Optionally, the third determining unit 603 is specifically configured to:
if the categories of the first object and the second object are the same and the intersection ratio of the first boundary box and the second boundary box is larger than a second threshold value, determining that the first object and the second object are the same target object; and if the categories of the first object and the second object are different, or the intersection ratio of the first bounding box and the second bounding box is smaller than a second threshold value, determining that the first object and the second object are different target objects.
Optionally, the third determining unit 603 is specifically configured to:
determining the size relation of the first bounding box and the second bounding box;
if the size of the second bounding box is larger than that of the first bounding box, reducing the size of the second bounding box to the size of the first bounding box to obtain a third bounding box; deleting point cloud data outside the third bounding box in the second target point cloud data to obtain fourth target point cloud data; and merging the first target point cloud data and the fourth target point cloud data to obtain third target point cloud data.
Optionally, the third determining unit 603 is further configured to:
if the size of the second boundary frame is smaller than that of the first boundary frame, determining the characteristic points which do not belong to the first target point cloud data in the second target point cloud data as noise points; and deleting noise points from the second target point cloud data to obtain third target point cloud data.
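The size-based fusion performed by the third determining unit 603 can be sketched as below, again with axis-aligned boxes (xmin, ymin, zmin, xmax, ymax, zmax); note that membership in the first target point cloud data is approximated here by containment in the first bounding box, which is an assumption rather than the patent's exact test:

```python
import numpy as np

def fuse_by_box_size(box1, pts1, box2, pts2):
    """box1/pts1 come from the detection model, box2/pts2 from clustering.
    Larger clustering box: crop it to the model box and merge the points.
    Smaller clustering box: drop clustering points outside the model box
    as noise points."""
    box1, box2 = np.asarray(box1, float), np.asarray(box2, float)
    volume = lambda b: np.prod(b[3:] - b[:3])
    inside = lambda pts, b: np.all((pts >= b[:3]) & (pts <= b[3:]), axis=1)

    if volume(box2) > volume(box1):
        pts4 = pts2[inside(pts2, box1)]        # fourth target point cloud data
        return np.vstack([pts1, pts4])         # third target point cloud data
    return pts2[inside(pts2, box1)]            # noise points removed
```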
Fig. 7 shows a third schematic structural diagram of the object detection apparatus 600 according to the above embodiment, and as shown in fig. 7, the object detection apparatus 600 may further include: an acquisition unit 701 and a fourth determination unit 702.
The acquisition unit 701 is used for acquiring original point cloud data of a main radar arranged on a vehicle roof and original point cloud data of a plurality of auxiliary radars arranged on a vehicle body. A fourth determining unit 702, configured to perform motion compensation on the original point cloud data of the primary radar and the original point cloud data of the secondary radar respectively; splicing the original point cloud data of the main radar after motion compensation and the original point cloud data of each auxiliary radar to obtain spliced point cloud data; and deleting the non-relevant feature points out of the preset range of the vehicle in the spliced point cloud data to obtain first point cloud data.
Optionally, the fourth determining unit 702 is further configured to determine a first transformation matrix from the vehicle coordinate system to the map coordinate system with respect to a target radar, where the target radar is each of the primary radar and the plurality of secondary radars.
Optionally, the fourth determining unit 702 is specifically configured to:
acquiring a second transformation matrix of the main radar from a map coordinate system to a main radar coordinate system and a third transformation matrix of the main radar from the main radar coordinate system to a vehicle coordinate system; performing motion compensation on the original point cloud data of the main radar based on the first transformation matrix, the second transformation matrix and the third transformation matrix; acquiring a fourth transformation matrix of the auxiliary radar from a map coordinate system to a corresponding auxiliary radar coordinate system, a fifth transformation matrix of the auxiliary radar from the auxiliary radar coordinate system to a vehicle coordinate system, and a sixth transformation matrix of the auxiliary radar from the auxiliary radar coordinate system to a main radar coordinate system; and performing motion compensation on the original point cloud data of the secondary radar based on the first transformation matrix, the fourth transformation matrix, the fifth transformation matrix and the sixth transformation matrix.
Optionally, the preset range is a three-dimensional space range. The fourth determining unit 702 is specifically configured to:
determining, as non-relevant feature points, the feature points in the spliced point cloud data that lie outside the preset plane range of the vehicle and the feature points whose vertical-axis coordinates lie outside a preset coordinate range; and deleting the non-relevant feature points from the spliced point cloud data to obtain the first point cloud data.
Optionally, the third determining unit 603 is further configured to, if it is determined that the first object and the second object are not the same target object, obtain second point cloud data, third point cloud data, and fourth point cloud data, where the second point cloud data, the third point cloud data, and the fourth point cloud data are obtained based on original point cloud data acquired by the radar in three consecutive acquisition periods after the current acquisition period, respectively; clustering the second point cloud data, the third point cloud data and the fourth point cloud data respectively to obtain a fourth boundary box representing the position of the fourth object and the category of the fourth object, a fifth boundary box representing the position of the fifth object and the category of the fifth object, and a sixth boundary box representing the position of the sixth object and the category of the sixth object; and determining whether the second object exists according to the categories of the second object, the fourth object, the fifth object and the sixth object and the intersection ratio of every two bounding boxes in the second bounding box, the fourth bounding box, the fifth bounding box and the sixth bounding box.
Of course, the object detection apparatus 600 provided by the embodiment of the present invention includes, but is not limited to, the above modules.
In actual implementation, the first determining unit 601, the second determining unit 602, the third determining unit 603, the obtaining unit 701, and the fourth determining unit 702 may be implemented by the processor 11 shown in fig. 1 calling the program code in the memory 12. For the specific implementation process, reference may be made to the description of the target detection method portion shown in fig. 2 to fig. 5, which is not described herein again.
Another embodiment of the present invention further provides a computer-readable storage medium, in which computer instructions are stored, and when the computer instructions are executed on an object detection apparatus, the object detection apparatus is caused to execute each step executed by the object detection apparatus in the method flow shown in the above method embodiment.
Another embodiment of the present invention further provides a chip system, which is applied to the target detection apparatus. The system-on-chip includes one or more interface circuits, and one or more processors 11. The interface circuit and the processor 11 are interconnected by wires. The interface circuit is adapted to receive signals from the memory 12 of the object detection means and to send said signals to the processor 11, said signals comprising computer instructions stored in said memory 12. When the processor 11 executes the computer instructions, the object detection apparatus performs the steps performed by the object detection apparatus in the method flow shown in the above-described method embodiment.
In another embodiment of the present invention, a computer program product is also provided, which includes instructions that, when executed on an object detection apparatus, cause the object detection apparatus to perform the steps performed by the object detection apparatus in the method flow shown in the above-mentioned method embodiment.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented using a software program, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present invention are produced wholly or partially. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk (SSD)), among others.
The above description is only an embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions within the technical scope of the present invention are intended to be covered by the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (15)

1. A method of object detection, comprising:
detecting a first object included in first point cloud data by adopting a target detection model to obtain a first boundary frame representing the position of the first object, the category of the first object and first target point cloud data included in the first boundary frame; the first point cloud data are obtained based on original point cloud data acquired by a radar on a vehicle in a current acquisition period;
clustering the first point cloud data to obtain a second boundary box representing the position of a second object, the category of the second object and second target point cloud data included by the second boundary box;
determining whether the first object and the second object are the same target object according to the categories of the first object and the second object and the intersection ratio of the first boundary box and the second boundary box;
if the first object and the second object are determined to be the same target object, determining third target point cloud data for identifying the target object from the first target point cloud data and the second target point cloud data based on the size relation of the first boundary box and the second boundary box;
and identifying the target object according to the third target point cloud data.
2. The target detection method of claim 1, wherein the clustering the first point cloud data to obtain a second bounding box representing a position of a second object, a category of the second object, and second target point cloud data included in the second bounding box comprises:
determining a first feature point belonging to a target plane in the first point cloud data, wherein the target plane is the plane in which the ground lies, determined according to target road information;
deleting the first feature points from the first point cloud data to obtain a plurality of second feature points;
clustering the plurality of second feature points to obtain a cluster corresponding to a second object and a category label of the second object;
and determining a second boundary box representing the position of the second object and second target point cloud data included by the second boundary box according to the clustering cluster, and determining the category of the second object according to the category label.
3. The object detection method according to claim 2, wherein the determining a first feature point belonging to an object plane in the first point cloud data includes:
acquiring target road information within a preset range of a vehicle;
determining a plane equation of the target plane according to the coordinates of the road characteristic points in the target road information;
dividing each feature point in the first point cloud data into a plurality of grids;
determining the distance between each grid and the target plane according to the characteristic points in each grid and the plane equation;
determining each feature point in the mesh as the first feature point if the distance is less than a first threshold.
4. The method of claim 3, wherein said determining a distance between each of said grids and said target plane based on feature points in each of said grids and said plane equation comprises:
determining the barycentric coordinates of each grid according to the coordinates of each feature point in each grid;
and determining the distance between each grid and the target plane according to each barycentric coordinate and the plane equation.
5. The object detection method according to any one of claims 1 to 4, wherein the determining whether the first object and the second object are the same target object according to the categories of the first object and the second object and the intersection ratio of the first bounding box and the second bounding box comprises:
if the first object and the second object are the same in category and the intersection ratio of the first bounding box and the second bounding box is larger than a second threshold, determining that the first object and the second object are the same target object;
and if the categories of the first object and the second object are different, or the intersection ratio of the first bounding box and the second bounding box is smaller than a second threshold, determining that the first object and the second object are different target objects.
6. The object detection method according to any one of claims 1 to 4, wherein the determining, from the first target point cloud data and the second target point cloud data, third target point cloud data for identifying the target object based on the size relationship of the first bounding box and the second bounding box comprises:
determining a size relationship of the first bounding box and the second bounding box;
if the size of the second bounding box is larger than that of the first bounding box, reducing the size of the second bounding box to the size of the first bounding box to obtain a third bounding box;
deleting point cloud data outside the third bounding box in the second target point cloud data to obtain fourth target point cloud data;
and merging the first target point cloud data and the fourth target point cloud data to obtain third target point cloud data.
7. The object detection method of claim 6, wherein after determining the size relationship of the first bounding box to the second bounding box, the object detection method further comprises:
if the size of the second boundary frame is smaller than that of the first boundary frame, determining feature points, which do not belong to the first target point cloud data, in the second target point cloud data as noise points;
and deleting the noise point from the second target point cloud data to obtain third target point cloud data.
8. The object detection method according to any one of claims 1 to 4, wherein before detecting the first object included in the first point cloud data using the object detection model, the object detection method further comprises:
acquiring original point cloud data of a main radar arranged on a vehicle roof and original point cloud data of a plurality of auxiliary radars arranged on a vehicle body;
respectively carrying out motion compensation on the original point cloud data of the main radar and the original point cloud data of the auxiliary radar;
splicing the original point cloud data of the main radar and the original point cloud data of each auxiliary radar after motion compensation to obtain spliced point cloud data;
and deleting the non-relevant feature points out of the preset range of the vehicle in the spliced point cloud data to obtain first point cloud data.
9. The object detection method according to claim 8, wherein before motion-compensating the raw point cloud data of the primary radar and the raw point cloud data of the secondary radar, the object detection method further comprises:
a first transformation matrix of the vehicle from a vehicle coordinate system to a map coordinate system is determined relative to a target radar, the target radar being each of the primary radar and a plurality of secondary radars.
10. The method of claim 9, wherein the motion compensating the raw point cloud data of the primary radar comprises:
acquiring a second transformation matrix of the main radar from a map coordinate system to a main radar coordinate system and a third transformation matrix of the main radar from the main radar coordinate system to a vehicle coordinate system;
performing motion compensation on the original point cloud data of the main radar based on the first transformation matrix, the second transformation matrix and the third transformation matrix of the main radar;
the motion compensation of the original point cloud data of the secondary radar comprises the following steps:
acquiring a fourth transformation matrix of the secondary radar from a map coordinate system to a corresponding secondary radar coordinate system, a fifth transformation matrix of the secondary radar from the secondary radar coordinate system to a vehicle coordinate system, and a sixth transformation matrix of the secondary radar from the secondary radar coordinate system to a main radar coordinate system;
and performing motion compensation on the original point cloud data of the secondary radar based on the first transformation matrix, the fourth transformation matrix, the fifth transformation matrix and the sixth transformation matrix of the secondary radar.
11. The object detection method according to claim 9 or 10, wherein the preset range is a three-dimensional spatial range, and the deleting non-relevant feature points out of the preset range of the vehicle in the spliced point cloud data to obtain first point cloud data comprises:
determining, as non-relevant feature points, the feature points in the spliced point cloud data that lie outside a preset plane range of the vehicle and the feature points in the spliced point cloud data whose vertical-axis coordinates lie outside a preset coordinate range;
and deleting the non-relevant feature points from the spliced point cloud data to obtain first point cloud data.
12. The object detection method according to claim 9 or 10, wherein after determining whether the first object and the second object are the same target object, the object detection method further comprises:
if the first object and the second object are determined not to be the same target object, obtaining second point cloud data, third point cloud data and fourth point cloud data, wherein the second point cloud data, the third point cloud data and the fourth point cloud data are obtained respectively based on original point cloud data acquired by the radar in three continuous acquisition periods after the current acquisition period;
clustering the second point cloud data, the third point cloud data and the fourth point cloud data respectively to obtain a fourth boundary frame representing the position of a fourth object and the category of the fourth object, a fifth boundary frame representing the position of a fifth object and the category of the fifth object, and a sixth boundary frame representing the position of a sixth object and the category of the sixth object;
and determining whether the second object exists according to the categories of the second object, the fourth object, the fifth object and the sixth object and the intersection ratio of every two bounding boxes among the second bounding box, the fourth bounding box, the fifth bounding box and the sixth bounding box.
13. An object detection device, comprising:
the first determining unit is used for detecting a first object included in the first point cloud data by adopting a target detection model to obtain a first boundary frame representing the position of the first object, the category of the first object and the first target point cloud data included in the first boundary frame; the first point cloud data is obtained based on original point cloud data acquired by a radar on a vehicle in a current acquisition period;
a second determining unit, configured to perform clustering processing on the first point cloud data to obtain a second boundary box indicating a position where a second object is located, a category of the second object, and second target point cloud data included in the second boundary box;
a third determining unit, configured to determine whether the first object and the second object are the same target object according to the categories of the first object and the second object and an intersection ratio of the first bounding box and the second bounding box;
the third determining unit is further configured to determine, if it is determined that the first object and the second object are the same target object, third target point cloud data for identifying the target object from the first target point cloud data and the second target point cloud data when the size of the second bounding box is larger than the size of the first bounding box;
the third determining unit is further configured to identify the target object according to the third target point cloud data.
14. An object detection device, characterized in that the object detection device comprises: a processor and a memory; the memory for storing computer program code, the computer program code comprising computer instructions; the object detection apparatus, when the processor executes the computer instructions, performs the object detection method according to any one of claims 1 to 12.
15. A computer readable storage medium comprising computer instructions which, when run on an object detection apparatus, cause the object detection apparatus to perform the object detection method of any one of claims 1 to 12.
CN202211065977.2A 2022-08-31 2022-08-31 Target detection method, device and storage medium Pending CN115457506A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211065977.2A CN115457506A (en) 2022-08-31 2022-08-31 Target detection method, device and storage medium

Publications (1)

Publication Number Publication Date
CN115457506A 2022-12-09

Family

ID=84301682

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211065977.2A Pending CN115457506A (en) 2022-08-31 2022-08-31 Target detection method, device and storage medium

Country Status (1)

Country Link
CN (1) CN115457506A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115965824A (en) * 2023-03-01 2023-04-14 安徽蔚来智驾科技有限公司 Point cloud data labeling method, point cloud target detection equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination