CN115690475A - Target detection method and device, electronic equipment and readable storage medium - Google Patents


Info

Publication number: CN115690475A
Application number: CN202211366669.3A
Authority: CN (China)
Prior art keywords: target, tracking result, point cloud, confidence
Legal status: Pending (assumed by Google Patents; not a legal conclusion)
Inventors: 张雪薇, 葛栢林, 王泮义
Current assignee: Wuhan Wanji Photoelectric Technology Co., Ltd.
Original assignee: Wuhan Wanji Photoelectric Technology Co., Ltd.
Application filed by Wuhan Wanji Photoelectric Technology Co., Ltd.
Other languages: Chinese (zh)
Classification: Radar Systems Or Details Thereof

Abstract

The application provides a target detection method and apparatus, an electronic device, and a readable storage medium. The method comprises: acquiring a current frame point cloud and a previous frame point cloud; performing target detection on the current frame point cloud by using a deep-learning-based target detection algorithm to obtain a target detection result; performing a clustering operation on the current frame point cloud to obtain a clustering result; tracking the third targets of the previous frame point cloud to obtain a tracking result for each third target; matching each tracking result with the first targets and determining whether a first target belonging to the same object as the tracking result exists; if so, outputting the information of the first target; if not, matching the tracking result with the second targets and determining whether a second target belonging to the same object as the tracking result exists; and if so, outputting the information of the tracking result. The method solves the problem of missed detections that arise in deep-learning-based detection methods when the training samples are not rich enough.

Description

Target detection method and device, electronic equipment and readable storage medium
Technical Field
The present application belongs to the technical field of data processing, and in particular relates to a target detection method and apparatus, an electronic device, and a readable storage medium.
Background
In the field of laser radar (lidar), deep-learning-based target detection algorithms rely on a backbone network to classify objects in point cloud data and obtain their position, size, and heading angle. Common target detection algorithms include SECOND, Voxel R-CNN, and PointPillars.
However, a deep-learning-based target detection method is strongly influenced by its training set: if the training samples are not rich enough, the trained deep learning network is prone to missing objects during actual detection. That is, an object present in the point cloud data is not detected, and a missed detection occurs.
Disclosure of Invention
The embodiments of the present application provide a target detection method and apparatus, an electronic device, and a readable storage medium, which can solve the problem that missed detections may occur in a deep-learning-based target detection method when the training set samples are not rich enough.
In a first aspect, an embodiment of the present application provides a target detection method, including:
acquiring a current frame point cloud and a previous frame point cloud;
performing target detection on the current frame point cloud by using a target detection algorithm based on deep learning to obtain a target detection result, wherein the target detection result comprises information of each first target;
performing clustering operation on the current frame point cloud to obtain a clustering result, wherein the clustering result comprises clustering clusters of each second target;
tracking third targets of the previous frame of point cloud to obtain tracking results of the third targets, wherein the third targets are objects with confidence degrees larger than or equal to a first confidence degree threshold value;
for the tracking result of each third target, matching the tracking result with each first target, and determining whether the first target which belongs to the same object as the tracking result exists in the target detection result;
if the first target belonging to the same object exists, outputting the information of the first target;
if the first targets which belong to the same object do not exist, matching the tracking result with each second target, and determining whether the second targets which belong to the same object as the tracking result exist in the clustering result or not;
and if the second target belonging to the same object exists, outputting the information of the tracking result.
Optionally, the information of the first target includes a confidence level;
the performing target detection on the current frame point cloud by using the deep-learning-based target detection algorithm to obtain a target detection result comprises:
performing target detection on the current frame point cloud by using the target detection algorithm to obtain a target detection result;
screening a first high confidence target in the target detection result, wherein the confidence coefficient of the first high confidence target is greater than or equal to the first confidence coefficient threshold;
screening a second high confidence target, wherein the second high confidence target is a first target with the confidence coefficient larger than or equal to a second confidence coefficient threshold value, and the second confidence coefficient threshold value is smaller than the first confidence coefficient threshold value;
wherein the first target comprises the first high confidence target and the second high confidence target.
Optionally, the matching the tracking result with each of the first targets to determine whether the first target belonging to the same object as the tracking result exists in the target detection result includes:
calculating a first intersection ratio between the tracking result and each first high confidence target;
if the first intersection ratio is larger than or equal to a preset intersection ratio threshold value, determining that the first target which belongs to the same object as the tracking result exists;
if all the first intersection ratios are smaller than the preset intersection ratio threshold value, calculating second intersection ratios between the tracking result and each second high confidence target;
if the second intersection ratio is larger than or equal to the preset intersection ratio threshold, determining that the first target which belongs to the same object as the tracking result exists;
updating the confidence coefficient of the object to which the second high confidence target belongs, wherein the latest confidence coefficient of the object to which the second high confidence target belongs is greater than the first confidence coefficient threshold value;
and if all the second intersection ratios are smaller than the preset intersection ratio threshold, determining that the first target which belongs to the same object as the tracking result does not exist.
Optionally, after determining whether the second target that belongs to the same object as the tracking result exists in the clustering result, the method further includes:
and if the second target belonging to the same object does not exist, updating the track parameter value.
Optionally, the matching the tracking result with each second target to determine whether the second target belonging to the same object as the tracking result exists in the clustering result includes:
calculating a third intersection ratio of the tracking result and each second target;
if the third intersection ratio is larger than or equal to a preset intersection ratio threshold value, determining that the second target which belongs to the same object as the tracking result exists;
updating the confidence level of the tracking result, wherein the latest confidence level of the tracking result is greater than the first confidence level threshold value;
and if all the third intersection ratios are smaller than the preset intersection ratio threshold, determining that the second target which belongs to the same object as the tracking result does not exist.
Optionally, after the updating of the track parameter value, the method further includes:
if the updated track parameter value is smaller than the preset track parameter value, acquiring the next frame of point cloud as the current frame of point cloud, and entering the step: performing target detection on the current frame point cloud by using a target detection algorithm based on deep learning to obtain a target detection result;
and if the updated track parameter value is greater than or equal to the preset track parameter value, deleting the track corresponding to the tracking result.
Optionally, the first confidence threshold is determined according to the correspondence between confidence and the precision and recall rates.
In a second aspect, an embodiment of the present application provides an object detection apparatus, including:
the device comprises an acquisition unit, a processing unit and a control unit, wherein the acquisition unit is used for acquiring a current frame point cloud and a previous frame point cloud;
the point cloud processing unit is used for carrying out target detection on the current frame point cloud by utilizing a target detection algorithm based on deep learning to obtain a target detection result, and the target detection result comprises information of each first target;
a clustering unit, configured to perform a clustering operation on the current frame point cloud to obtain a clustering result, wherein the clustering result comprises the cluster of each second target;
a tracking unit, configured to track the third targets of the previous frame point cloud to obtain a tracking result of each third target, wherein a third target is an object whose confidence is greater than or equal to a first confidence threshold;
a matching unit, configured to, for the tracking result of each third target, match the tracking result with each first target and determine whether a first target that belongs to the same object as the tracking result exists in the target detection result;
the matching unit is further configured to output the information of the first target if the first target belonging to the same object exists;
to match the tracking result with each second target if no first target belonging to the same object exists, and to determine whether a second target that belongs to the same object as the tracking result exists in the clustering result;
and to output the information of the tracking result if the second target belonging to the same object exists.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor, when executing the computer program, implements the method according to any one of the above first aspects.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, implements the method according to any one of the first aspect.
In a fifth aspect, the present application provides a computer program product, which when run on an electronic device, causes the electronic device to execute the method of any one of the above first aspects.
It is to be understood that, for the beneficial effects of the second aspect to the fifth aspect, reference may be made to the relevant description in the first aspect, and details are not described herein again.
Compared with the prior art, the embodiment of the application has the advantages that:
the method comprises the steps of performing target detection on current frame point cloud by using a target detection algorithm based on deep learning to obtain a target detection result, wherein the target detection result comprises information of each first target; tracking third targets of the previous frame of point cloud to obtain tracking results of all the third targets, wherein the third targets are objects with confidence degrees larger than a first confidence degree threshold value; and matching the tracking result with each first target according to the tracking result of each third target, determining whether the first target which belongs to the same object as the tracking result exists in the target detection result, combining the tracking result with the target detection result, accurately obtaining the state of the object in the current frame point cloud, and simultaneously reducing the object subjected to false detection in the current frame point cloud.
And performing clustering operation on the current frame point cloud to obtain a clustering result, wherein the clustering result comprises clustering clusters of each second target. The clustering operation can detect objects missed by the target detection algorithm based on deep learning by clustering each cluster point to detect the objects.
Determining whether a second target which belongs to the same object as the tracking result exists in the clustering result or not by matching the tracking result with each second target; if a second target belonging to the same object exists, the tracking result information is output, the information of the object which is not detected can be obtained, and the recall rate of the target detection algorithm based on deep learning is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a first schematic flowchart of a target detection method according to an embodiment of the present application;
FIG. 2 is a diagram illustrating a mapping relationship provided in an embodiment of the present application;
FIG. 3 is a second flowchart of a target detection method according to an embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of an object detection apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
Fig. 1 is a first schematic flowchart of a target detection method according to an embodiment of the present application. As shown in fig. 1, the method includes:
s11: and acquiring a current frame point cloud and a previous frame point cloud.
In application, the surrounding environment is scanned by a radar to obtain the point cloud at the current moment, namely the current frame point cloud, and the point cloud at the previous moment is acquired as the previous frame point cloud. At least one object is included in the point cloud.
S12: and carrying out target detection on the current frame point cloud by using a target detection algorithm based on deep learning to obtain a target detection result.
Wherein the target detection result includes information of each first target.
In application, target detection can be performed on the current frame point cloud through a high-performance 3D target detection network such as Voxel R-CNN or PointPillars to obtain the target detection result.
The information of the first target is the information of an object in the point cloud obtained through the deep-learning-based target detection algorithm. The information of the first target comprises the category of the target frame, the center coordinates of the target frame, the size of the target frame, the heading angle, and the confidence. The target frame contains one object.
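As a non-limiting illustration, the information of a first target could be represented as follows; the field names are assumptions of this sketch rather than the patent's notation, and Python is used purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One first target produced by the deep-learning detector (illustrative fields)."""
    category: str     # class of the target frame, e.g. "car" or "pedestrian"
    center: tuple     # (x, y, z) center coordinates of the target frame
    size: tuple       # (length, width, height) of the target frame
    heading: float    # heading angle, e.g. in radians
    confidence: float # detection confidence in [0, 1]
```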
S13: and carrying out clustering operation on the current frame point cloud to obtain a clustering result.
And the clustering result comprises clustering clusters of the second targets.
In application, the cluster of the second target is a cluster of objects obtained through a clustering algorithm in the point cloud.
Through a clustering algorithm such as k-means or DBSCAN, points in the current frame point cloud whose spacing is smaller than the clustering radius are gathered into clusters, and a cluster is retained only if its number of points meets the minimum-points requirement, which yields the clustering result. Owing to these characteristics of the clustering algorithm, the objects in the point cloud can be reliably detected.
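As a non-limiting illustration, a minimal clustering sketch assuming a NumPy point array and the DBSCAN implementation in scikit-learn, where eps plays the role of the clustering radius and min_samples the minimum-points requirement:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_point_cloud(points: np.ndarray, radius: float = 0.5, min_points: int = 10):
    """Group points whose spacing is below the clustering radius into clusters.

    points: (N, 3) array of x, y, z coordinates of the current frame point cloud.
    Returns a list of (M_i, 3) arrays, one per cluster; noise points are dropped.
    """
    labels = DBSCAN(eps=radius, min_samples=min_points).fit(points).labels_
    return [points[labels == k] for k in set(labels) if k != -1]
```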
S14: and tracking the third target of the previous frame of point cloud to obtain the tracking result of each third target.
Wherein the third target is an object whose confidence is greater than or equal to the first confidence threshold. The tracking result is the representation of the third target in the current frame point cloud and provides a basis for matching with the first targets and the second targets. The information of the tracking result comprises the category of the target frame, the center coordinates of the target frame, the size of the target frame, the heading angle, and the confidence. The target frame contains one object.
In application, the first confidence threshold is used to identify objects that are real and of interest to the user.
The third targets of the previous frame point cloud can be tracked by a Kalman filter together with the Hungarian matching algorithm to obtain the tracking results, which provides a basis for determining whether a missed object exists in the target detection result.
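As a non-limiting illustration, the Hungarian association step could look as follows. The patent names the Kalman filter and the Hungarian matching algorithm but fixes no implementation details, so this sketch assumes the Kalman prediction step has already produced predicted_boxes, takes iou_fn as any box-overlap measure (see the intersection ratio sketch further below), and uses an assumed threshold of 0.3.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(predicted_boxes, detected_boxes, iou_fn, iou_threshold=0.3):
    """Hungarian association between Kalman-predicted track boxes and detections.

    predicted_boxes: list of boxes predicted for the current frame by each
                     track's Kalman filter (the prediction step is assumed done).
    detected_boxes:  list of boxes detected in the current frame.
    iou_fn:          function computing the intersection ratio of two boxes.
    Returns (matches, unmatched_track_indices, unmatched_detection_indices).
    """
    if len(predicted_boxes) == 0 or len(detected_boxes) == 0:
        return [], list(range(len(predicted_boxes))), list(range(len(detected_boxes)))
    # Cost of assigning detection j to track i is 1 - IoU, so the Hungarian
    # algorithm maximizes the total overlap.
    cost = np.array([[1.0 - iou_fn(p, d) for d in detected_boxes]
                     for p in predicted_boxes])
    rows, cols = linear_sum_assignment(cost)
    matches = [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= iou_threshold]
    matched_rows = {r for r, _ in matches}
    matched_cols = {c for _, c in matches}
    unmatched_tracks = [i for i in range(len(predicted_boxes)) if i not in matched_rows]
    unmatched_dets = [j for j in range(len(detected_boxes)) if j not in matched_cols]
    return matches, unmatched_tracks, unmatched_dets
```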
The first confidence threshold is determined according to the correspondence between confidence and the precision and recall rates. Specifically, a confidence at which both precision and recall are high is set as the first confidence threshold. Fig. 2 is a schematic diagram of this correspondence provided in an embodiment of the present application. As shown in fig. 2, precision increases with confidence while recall decreases with confidence. As can be seen from the figure, the optimal confidence value is 0.39, at which both the precision and the recall rate are relatively high.
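As a non-limiting illustration, one way to derive the first confidence threshold from sampled precision/recall curves such as those in Fig. 2 is to take the confidence at which the smaller of the two rates is largest, which lands at the curves' crossover (about 0.39 in the figure); sampling the curves from a validation set is an assumption of this sketch.

```python
import numpy as np

def pick_confidence_threshold(confidences, precisions, recalls):
    """Pick the confidence at which precision and recall are jointly highest.

    confidences, precisions, recalls: equal-length 1-D arrays sampled from
    curves like those in Fig. 2. At the curves' crossover, min(precision,
    recall) is maximized, giving a threshold where both rates stay high.
    """
    scores = np.minimum(np.asarray(precisions), np.asarray(recalls))
    return np.asarray(confidences)[int(np.argmax(scores))]
```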
S15: and matching the tracking result with each first target according to the tracking result of each third target, and determining whether the first target which belongs to the same object as the tracking result exists in the target detection result.
In application, the tracking result is matched with each first target; if the tracking result matches one of the first targets, it is determined that the tracking result and that first target belong to the same object. If the tracking result matches none of the first targets, it is determined that no first target belonging to the same object as the tracking result exists.
S16: and if the first target belongs to the same object, outputting the information of the first target.
In application, when it is determined that a first target belonging to the same object as the tracking result exists, the first target of the current frame point cloud can be taken as an accurate result, and the information of the first target is output as the detection result of the object in the current frame point cloud.
S17: and if the first targets which belong to the same object do not exist, matching the tracking result with each second target, and determining whether the second targets which belong to the same object as the tracking result exist in the clustering result.
In application, when it is determined that no first target belonging to the same object as the tracking result exists, the deep-learning-based target detection algorithm may have missed a detection, and it needs to be determined whether there is a missed object.
The tracking result is matched with each second target. If the tracking result matches one of the second targets, it is determined that the tracking result and that second target belong to the same object. If the tracking result matches none of the second targets, it is determined that no second target belonging to the same object as the tracking result exists.
S18: and if a second target belonging to the same object exists, outputting the information of the tracking result.
In application, the existence of a second target belonging to the same object indicates that a missed object does exist in the current frame point cloud, namely that second target. However, the second target comes from the clustering result, and its object information is inaccurate. The information of the tracking result is therefore output as the detection result of the object in the current frame point cloud, yielding the detection result of the missed object in the current frame point cloud.
It can be understood that the deep-learning-based target detection algorithm is strongly influenced by its data set, and missed detections may occur when the amount of data in the data set is insufficient. The clustering algorithm, by contrast, gathers points whose spacing is smaller than the clustering radius into clusters and keeps a cluster only if its number of points meets the minimum-points requirement, so that all objects in the point cloud are detected. This provides a basis for confirming whether there is a missed object.
On the basis of the target detection result, the missed detection object can be found by combining the tracking result obtained by the tracking algorithm and the clustering result of the clustering algorithm, so that the detection results of all objects in the current frame point cloud are obtained, and the recall rate of the target detection algorithm based on deep learning is improved.
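As a non-limiting illustration, the per-frame fusion of detection, clustering and tracking described above (steps S11 to S19) could be organized as follows; detect_frame, best_match and the track attributes (box, miss_count, as_detection) are assumptions of this sketch, and the tiered matching of S151 to S156 is reduced here to a single greedy intersection ratio test.

```python
def detect_frame(current_cloud, tracks, detector, clusterer, tracker,
                 iou_fn, iou_threshold):
    """One plausible per-frame fusion of detection, clustering and tracking.

    detector, clusterer and tracker stand in for the deep-learning detector,
    the clustering algorithm and the Kalman/Hungarian tracker of the patent.
    """
    detections = detector(current_cloud)           # S12: first targets
    cluster_boxes = clusterer(current_cloud)       # S13: second targets (as boxes)
    outputs = []
    for track in tracker(tracks):                  # S14: tracking result per third target
        i = best_match(track.box, [d.box for d in detections], iou_fn, iou_threshold)
        if i is not None:                          # S15/S16: a first target matches
            outputs.append(detections[i])
        elif best_match(track.box, cluster_boxes, iou_fn, iou_threshold) is not None:
            outputs.append(track.as_detection())   # S17/S18: missed by the detector
        else:
            track.miss_count += 1                  # S19: update the track parameter value
    return outputs

def best_match(box, candidate_boxes, iou_fn, iou_threshold):
    """Index of the first candidate whose intersection ratio reaches the threshold."""
    for i, candidate in enumerate(candidate_boxes):
        if iou_fn(box, candidate) >= iou_threshold:
            return i
    return None
```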
In this embodiment, target detection is performed on the current frame point cloud by using a deep-learning-based target detection algorithm to obtain a target detection result, where the target detection result comprises the information of each first target. The third targets of the previous frame point cloud are tracked to obtain a tracking result for each third target, where a third target is an object whose confidence is greater than or equal to the first confidence threshold. For the tracking result of each third target, the tracking result is matched with each first target to determine whether a first target belonging to the same object as the tracking result exists in the target detection result. By combining the tracking result with the target detection result, the state of the objects in the current frame point cloud is obtained accurately, while falsely detected objects in the current frame point cloud are reduced.
A clustering operation is performed on the current frame point cloud to obtain a clustering result, where the clustering result comprises the cluster of each second target. Because the clustering operation detects objects directly from the point clusters, it can find objects missed by the deep-learning-based target detection algorithm.
By matching the tracking result with each second target, it is determined whether a second target belonging to the same object as the tracking result exists in the clustering result; if such a second target exists, the information of the tracking result is output, so that the information of the missed object is recovered and the recall rate of the deep-learning-based target detection algorithm is improved.
In one embodiment, the information of the first target includes a confidence level;
step S12, comprising:
s121: and carrying out target detection on the current frame point cloud by using a target detection algorithm to obtain a target detection result.
S122: and screening a first high confidence target in the target detection result, wherein the confidence coefficient of the first high confidence target is greater than or equal to a first confidence coefficient threshold value.
In application, the target detection result is filtered by the first confidence threshold: the first targets whose confidence is greater than or equal to the first confidence threshold are screened out as the first high confidence targets.
S123: and screening a second high confidence target, wherein the second high confidence target is a first target with the confidence coefficient greater than or equal to a second confidence coefficient threshold value, and the second confidence coefficient threshold value is smaller than the first confidence coefficient threshold value.
Wherein the first target comprises a first high confidence target and a second high confidence target.
In application, after the first high confidence targets are screened out, the remaining detection results are filtered by the second confidence threshold: the first targets whose confidence is smaller than the first confidence threshold but greater than or equal to the second confidence threshold are screened out as the second high confidence targets.
The first targets in the target detection result are screened through the first confidence threshold and the second confidence threshold, so as to screen out the real objects of interest to the user.
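As a non-limiting illustration, the two-tier screening of S121 to S123 could be written as follows; the value 0.39 follows Fig. 2, while the second confidence threshold value is an assumption of this sketch.

```python
def screen_detections(detections, first_threshold=0.39, second_threshold=0.2):
    """Split first targets into the two confidence tiers (S121-S123).

    detections: objects with a confidence attribute, e.g. the Detection
    dataclass sketched earlier. first_threshold corresponds to the first
    confidence threshold; second_threshold is a lower, assumed value.
    """
    tier1 = [d for d in detections if d.confidence >= first_threshold]
    tier2 = [d for d in detections
             if second_threshold <= d.confidence < first_threshold]
    return tier1, tier2  # together these form the retained first targets
```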
Step S15, comprising:
s151: and calculating a first intersection ratio between the tracking result and each first high confidence target.
In application, the first intersection ratio, i.e., the intersection-over-union (IoU), between the target frame of the tracking result and the target frame of each first high confidence target is calculated; the intersection ratio is the area of the overlap of the two target frames divided by the area of their union.
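As a non-limiting illustration, the intersection ratio could be computed as follows for axis-aligned boxes; since the patent's target frames carry a heading angle, a real system would instead compute the overlap of rotated (e.g., bird's-eye-view) boxes.

```python
def iou_2d(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x_min, y_min, x_max, y_max)."""
    ix = max(0.0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    iy = max(0.0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    inter = ix * iy                                 # area of the overlapping part
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter                 # area of the combined boxes
    return inter / union if union > 0 else 0.0
```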
S152: and if the first intersection ratio is larger than or equal to a preset intersection ratio threshold value, determining that a first target which belongs to the same object with the tracking result exists.
In application, if the first intersection ratio between the tracking result and one of the first high confidence targets is greater than or equal to the preset intersection ratio threshold, the tracking result matches that first high confidence target and the two belong to the same object, so it is determined that a first target belonging to the same object as the tracking result exists. The information of the first high confidence target is output as the detection result of the object in the current frame point cloud.
S153: and if all the first cross-over ratios are smaller than a preset cross-over ratio threshold value, calculating second cross-over ratios between the tracking result and each second high-confidence target.
In application, if the first intersection ratios between the tracking result and all the first high confidence targets are smaller than the preset intersection ratio threshold, the tracking result matches none of the first high confidence targets. The tracking result is then matched with each second high confidence target, and the second intersection ratio between the tracking result and each second high confidence target is calculated, so as to determine whether a second high confidence target belongs to the same object as the tracking result.
S154: and if the second intersection ratio is larger than or equal to the preset intersection ratio threshold, determining that a first target which belongs to the same object with the tracking result exists.
In application, if the second intersection ratio between the tracking result and one of the second high confidence targets is greater than or equal to the preset intersection ratio threshold, the tracking result matches that second high confidence target and the two belong to the same object, so it is correspondingly determined that a first target belonging to the same object as the tracking result exists. The information of the second high confidence target is output as the detection result of the object in the current frame point cloud.
S155: and updating the confidence coefficient of the object to which the second high confidence target belongs, wherein the latest confidence coefficient of the object to which the second high confidence target belongs is greater than the first confidence coefficient threshold value.
In application, the confidence of the object to which the second high confidence target belongs is raised so that the second high confidence target can subsequently serve as an execution object of the tracking algorithm.
S156: and if all the second intersection ratios are smaller than a preset intersection ratio threshold value, determining that the first target which belongs to the same object with the tracking result does not exist.
In application, if the second intersection ratios between the tracking result and all the second high confidence targets are smaller than the preset intersection ratio threshold, the tracking result matches none of the second high confidence targets, and correspondingly no first target belonging to the same object as the tracking result exists. This indicates that the deep-learning-based target detection algorithm may have missed a detection.
The tracking result is matched against the first high confidence targets first, and only against the second high confidence targets when no first high confidence target matches, which shortens the time spent matching the tracking result with the first targets.
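As a non-limiting illustration, the cascade of S151 to S156 could be written as follows; the detection attributes (box, confidence) and the confidence value 0.4 (any value above the first confidence threshold of 0.39 would do) are assumptions of this sketch.

```python
def match_first_targets(track_box, tier1, tier2, iou_fn, iou_threshold):
    """Cascade matching of one tracking result against the first targets (S151-S156).

    tier1/tier2 are the first and second high confidence targets; boxes are
    whatever representation iou_fn expects. Returns the matched detection or None.
    """
    for det in tier1:                      # S151/S152: try the first tier first
        if iou_fn(track_box, det.box) >= iou_threshold:
            return det
    for det in tier2:                      # S153/S154: fall back to the second tier
        if iou_fn(track_box, det.box) >= iou_threshold:
            det.confidence = 0.4           # S155: assumed value above the 0.39 threshold
            return det
    return None                            # S156: no first target matches
```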
Step S17, comprising:
s171: and calculating a third intersection ratio of the tracking result and each second target.
S172: and if the third intersection ratio is larger than or equal to the preset intersection ratio threshold, determining that a second target which belongs to the same object with the tracking result exists.
In application, if the third intersection ratio between the tracking result and one of the second targets is greater than or equal to the preset intersection ratio threshold, the tracking result matches that second target and the two belong to the same object, so it is correspondingly determined that a second target belonging to the same object as the tracking result exists. This shows that the deep-learning-based target detection algorithm does have a missed detection. The information of the tracking result is output as the detection result of the missed object in the current frame point cloud. The tracking result obtained by the tracking algorithm compensates for the inaccuracy of the target information in the clustering result, so that the detection result of the missed object in the current frame point cloud is obtained.
S173: updating the confidence coefficient of the tracking result, wherein the latest confidence coefficient of the tracking result is greater than a first confidence coefficient threshold value;
in application, the confidence of the tracking result is raised so that the object in the target frame of the tracking result can subsequently serve as an execution object of the tracking algorithm.
S174: and if all the third intersection ratios are smaller than a preset intersection ratio threshold value, determining that a second target which belongs to the same object with the tracking result does not exist.
In application, if the third intersection ratios between the tracking result and all the second targets are smaller than the preset intersection ratio threshold, the tracking result matches none of the second targets, and it is correspondingly determined that no second target belonging to the same object as the tracking result exists. This indicates a situation where the deep-learning-based target detection algorithm may not have missed any detection.
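As a non-limiting illustration, the fallback matching of S171 to S174 could be written as follows; reducing each cluster to a bounding box and the confidence value 0.4 are assumptions of this sketch.

```python
def match_clusters(track, cluster_boxes, iou_fn, iou_threshold):
    """Fallback matching of one tracking result against the clusters (S171-S174).

    cluster_boxes are the second targets, each reduced to the bounding box of
    its points. Returns True when some cluster covers the same object as the
    tracking result, in which case the tracking result's info is output (S18).
    """
    for cluster_box in cluster_boxes:
        if iou_fn(track.box, cluster_box) >= iou_threshold:  # S172: same object
            track.confidence = 0.4   # S173: assumed value above the first threshold
            return True
    return False                     # S174: no second target matches
```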
Fig. 3 is a second flowchart of a target detection method according to an embodiment of the present application. As shown in fig. 3, after step S17, the method further includes:
s19: and if the second target belonging to the same object does not exist, updating the track parameter value.
In application, although no second target belonging to the same object exists, it is still possible that no detection was missed. To make sure no missed detection goes unnoticed, the tracking result and its track are retained for subsequent tracking, so as to check whether the object in the target frame of the tracking result is a missed object.
If it is not a missed object, keeping and tracking the tracking result and its track indefinitely would cause detection problems. A track parameter is therefore set, whose value represents the tracking state of the object in the tracking result, so that it can be judged when the object does not belong to the missed objects.
S20: if the updated track parameter value is smaller than the preset track parameter value, acquiring the next frame of point cloud as the current frame of point cloud, and entering the step: and carrying out target detection on the current frame point cloud by using a target detection algorithm based on deep learning to obtain a target detection result.
In application, if the updated track parameter value is smaller than the preset track parameter value, the number of unmatched frames of the object has not reached the preset threshold while the object is being continuously tracked, and it cannot yet be determined that the object in the target frame of the tracking result does not belong to the missed objects. Tracking therefore continues: the surrounding environment is scanned by the radar at the next moment to obtain the next frame point cloud. The next frame point cloud is taken as the current frame point cloud, the current frame point cloud at the current moment is taken as the previous frame point cloud, and target detection continues.
S21: and if the updated track parameter value is greater than or equal to the preset track parameter value, deleting the track corresponding to the tracking result.
In application, the updated track parameter value is greater than or equal to the preset track parameter value, which indicates that the number of unmatched frames of the object reaches the preset threshold value in the state of continuously tracking the object, and at this time, it can be determined that the object in the target frame of the tracking result does not belong to the object which is not detected, and the tracking is not performed any more. Thus, the track is deleted.
Illustratively, in the first frame there is a third target A1 (the ID of its object is 1) whose confidence is greater than or equal to the first confidence threshold. The third target of the first frame is tracked by the tracking algorithm, the tracking result of the third target A1 in the second frame is T2, and the tracking result T2 is matched with the targets of the second frame. If a target matching the tracking result T2 exists, the result of the second frame is output, namely the first target A2 or the tracking result T2. If no target matches the tracking result T2, the track parameter value of the object with ID 1 is incremented by 1 (from an initial value of 0) to 1, and the result of the second frame, the tracking result T2, is output. The updated track parameter value of 1 indicates that the object with ID 1 has been unmatched for 1 frame.
Assume the second frame has no target matching the tracking result T2, so the track parameter value is 1. The tracking result T2 of the second frame is tracked by the tracking algorithm, with the tracking result T2 serving as a third target, and its tracking result in the third frame is T3. The tracking result T3 is matched with the targets of the third frame. If a target matching the tracking result T3 exists, the result of the third frame is output, namely the first target A3 or the tracking result T3, and the track parameter value of the object with ID 1 is reset to 0. If no target matches the tracking result T3, the track parameter value of the object with ID 1 is incremented by 1 to 2, indicating that the object with ID 1 has been unmatched for 2 frames.
The object with ID 1 continues to be tracked according to the above steps until the updated track parameter value is greater than or equal to the preset track parameter value. When that happens, it is determined that the object with ID 1 does not belong to the missed objects, the track of the object with ID 1 is deleted, and tracking stops.
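As a non-limiting illustration, the track-parameter bookkeeping of S19 to S21 and of the ID-1 example above could be written as follows; max_misses stands in for the preset track parameter value, whose actual value the patent does not fix.

```python
def update_track(track, matched: bool, max_misses: int = 3):
    """Track parameter bookkeeping for one frame (S19-S21).

    matched: whether some first or second target matched this track's result.
    Returns False when the track should be deleted (S21).
    """
    if matched:
        track.miss_count = 0          # matched this frame: reset, as for A3/T3 above
        return True
    track.miss_count += 1             # unmatched frame: increment (S19)
    return track.miss_count < max_misses  # S20: keep tracking; S21: delete when reached
```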
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Corresponding to the methods described in the above embodiments, an object detection apparatus is described below; for convenience of explanation, only the portions related to the embodiments of the present application are shown.
Fig. 4 is a schematic structural diagram of an object detection apparatus according to an embodiment of the present application. As shown in fig. 4, the apparatus includes:
an acquiring unit 10, configured to acquire a current frame point cloud and a previous frame point cloud;
the point cloud processing unit 11 is configured to perform target detection on the current frame point cloud by using a target detection algorithm based on deep learning to obtain a target detection result, where the target detection result includes information of each first target;
the system comprises a clustering unit, a first target, a second target and a third target, wherein the clustering unit is used for carrying out clustering operation on current frame point clouds to obtain clustering results, and the clustering results comprise clustering clusters of the second targets;
the system comprises a first confidence coefficient threshold, a second confidence coefficient threshold, a third target and a third target tracking unit, wherein the first confidence coefficient threshold is greater than or equal to the confidence coefficient of the first frame of point cloud;
the matching unit 12 is configured to match the tracking result with each first target according to the tracking result of each third target, and determine whether a first target that belongs to the same object as the tracking result exists in the target detection result;
the information processing device is used for outputting the information of a first target if the first target belongs to the same object;
the tracking result is matched with each second target if the first target belonging to the same object does not exist, and whether the second target belonging to the same object as the tracking result exists in the clustering result or not is determined;
and outputting the tracking result if a second target belonging to the same object exists.
In an embodiment, the point cloud processing unit is specifically configured to perform target detection on a current frame point cloud by using a target detection algorithm to obtain a target detection result;
screening a first high confidence target in the target detection result, wherein the confidence coefficient of the first high confidence target is greater than or equal to a first confidence coefficient threshold value;
screening a second high confidence target, wherein the second high confidence target is a first target with the confidence coefficient larger than or equal to a second confidence coefficient threshold value, and the second confidence coefficient threshold value is smaller than the first confidence coefficient threshold value;
wherein the first target comprises a first high confidence target and a second high confidence target.
In one embodiment, the matching unit is specifically configured to calculate a first intersection ratio between the tracking result and each first high confidence object;
if the first intersection ratio is larger than or equal to a preset intersection ratio threshold value, determining that a first target which belongs to the same object with the tracking result exists;
if all the first cross-over ratios are smaller than a preset cross-over ratio threshold value, second cross-over ratios between the tracking result and each second high-confidence target are calculated;
if the second intersection ratio is larger than or equal to a preset intersection ratio threshold, determining that a first target which belongs to the same object with the tracking result exists;
updating the confidence coefficient of the object to which the second high confidence target belongs, wherein the latest confidence coefficient of the object to which the second high confidence target belongs is greater than the first confidence coefficient threshold;
and if the second intersection ratio is smaller than the preset intersection ratio threshold, determining that the first target which belongs to the same object with the tracking result does not exist.
In an embodiment, the matching unit is specifically configured to calculate a third intersection ratio of the tracking result and each second target.
If the third intersection ratio is larger than or equal to the preset intersection ratio threshold, determining that a second target which belongs to the same object with the tracking result exists;
updating the confidence coefficient of the tracking result, wherein the latest confidence coefficient of the tracking result is greater than a first confidence coefficient threshold value;
if all the third intersection ratios are smaller than the preset intersection ratio threshold, determining that a second target which belongs to the same object as the tracking result does not exist.
in an embodiment, the matching unit is further configured to update the trajectory parameter value if there is no second object belonging to the same object.
In an embodiment, the matching unit is further configured to, if the updated track parameter value is smaller than the preset track parameter value, obtain the next frame point cloud as the current frame point cloud and perform the step of: performing target detection on the current frame point cloud by using a deep-learning-based target detection algorithm to obtain a target detection result;
and if the updated track parameter value is greater than or equal to the preset track parameter value, deleting the track corresponding to the tracking result.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the application. As shown in fig. 5, the electronic apparatus 2 of this embodiment includes: at least one processor 20 (only one shown in fig. 5), a memory 21, and a computer program 22 stored in the memory 21 and executable on the at least one processor 20, the steps of any of the various method embodiments described above being implemented when the computer program 22 is executed by the processor 20.
The electronic device 2 may be a desktop computer, a notebook computer, a palmtop computer, or a vehicle-mounted device installed on a vehicle. The electronic device 2 may include, but is not limited to, the processor 20 and the memory 21. Those skilled in the art will appreciate that fig. 5 is merely an example of the electronic device 2 and does not constitute a limitation on the electronic device 2, which may include more or fewer components than shown, a combination of some components, or different components, such as an input/output device, a network access device, and the like.
The processor 20 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or any conventional processor.
The memory 21 may in some embodiments be an internal storage unit of the electronic device 2, such as a hard disk or a memory of the electronic device 2. The memory 21 may also be an external storage device of the electronic device 2 in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the electronic device 2. Further, the memory 21 may also include both an internal storage unit and an external storage device of the electronic device 2. The memory 21 is used for storing an operating system, an application program, a BootLoader (BootLoader), data, and other programs, such as program codes of the computer program. The memory 21 may also be used to temporarily store data that has been output or is to be output.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the steps in the above-mentioned method embodiments may be implemented.
Embodiments of the present application provide a computer program product, which when executed on an electronic device, enables the electronic device to implement the steps in the above method embodiments.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to a photographing apparatus/terminal device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, and a software distribution medium, for example, a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In some jurisdictions, in accordance with legislation and patent practice, computer-readable media may not be electrical carrier signals or telecommunication signals.
In the above embodiments, the description of each embodiment has its own emphasis, and reference may be made to the related description of other embodiments for parts that are not described or recited in any embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the above-described apparatus/network device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical function division, and other divisions may be realized in practice, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A method of object detection, comprising:
acquiring a current frame point cloud and a previous frame point cloud;
performing target detection on the current frame point cloud by using a target detection algorithm based on deep learning to obtain a target detection result, wherein the target detection result comprises information of each first target;
performing clustering operation on the current frame point cloud to obtain a clustering result, wherein the clustering result comprises clustering clusters of each second target;
tracking third targets of the previous frame of point cloud to obtain tracking results of the third targets, wherein the third targets are objects with confidence degrees larger than or equal to a first confidence degree threshold value;
for the tracking result of each third target, matching the tracking result with each first target, and determining whether the first target belonging to the same object as the third target exists in the target detection results;
if the first target belonging to the same object exists, outputting the information of the first target;
if the first targets which belong to the same object do not exist, matching the tracking result with each second target, and determining whether the second targets which belong to the same object as the tracking result exist in the clustering result;
and if the second target belonging to the same object exists, outputting the information of the tracking result.
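For orientation, the per-frame flow of claim 1 can be sketched in Python. This is a reading aid, not the patented implementation: it assumes axis-aligned 2D boxes in [x1, y1, x2, y2] form, dict-based targets carrying a "box" field, and an illustrative IoU threshold of 0.5; claims 3 and 5 below make the IoU matching criterion explicit.

```python
# Illustrative sketch only; box format, helper names, and threshold are assumptions.
from typing import List, Optional

def iou(a: List[float], b: List[float]) -> float:
    """Intersection over union of two axis-aligned boxes [x1, y1, x2, y2]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0.0 else 0.0

def process_track(track_box: List[float],
                  first_targets: List[dict],
                  second_targets: List[dict],
                  iou_thr: float = 0.5) -> Optional[dict]:
    """For one tracking result: prefer a detector match (first targets),
    then fall back to a cluster match (second targets)."""
    for det in first_targets:
        if iou(track_box, det["box"]) >= iou_thr:
            return det                    # output the first target's information
    for clu in second_targets:
        if iou(track_box, clu["box"]) >= iou_thr:
            return {"box": track_box}     # output the tracking result's information
    return None                           # no match; see the track handling in claim 4
```

On this reading, the deep learning detector is trusted first, and the geometry-only clustering acts as a safety net against detector misses.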
2. The method of claim 1, wherein the information of the first target comprises a confidence level;
the performing target detection on the current frame point cloud by using the deep learning-based target detection algorithm to obtain the target detection result comprises:
performing target detection on the current frame point cloud by using the target detection algorithm to obtain a target detection result;
screening a first high confidence target in the target detection result, wherein the confidence of the first high confidence target is greater than or equal to the first confidence threshold;
screening a second high confidence target, wherein the second high confidence target is a first target whose confidence is greater than or equal to a second confidence threshold, the second confidence threshold being smaller than the first confidence threshold;
wherein the first target comprises the first high confidence target and the second high confidence target.
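Claim 2's two-tier screening, sketched with illustrative threshold values (the claim only requires that the second threshold be smaller than the first); placing the second tier strictly below the first threshold is an assumption:

```python
# Illustrative thresholds only; the claim requires T2 < T1 but fixes no values.
T1, T2 = 0.7, 0.4

def screen(detections: list) -> tuple:
    """Split raw detections into first and second high confidence targets."""
    first_high = [d for d in detections if d["score"] >= T1]
    second_high = [d for d in detections if T2 <= d["score"] < T1]
    return first_high, second_high
```

With this split, second high confidence targets are not output on their own; per claim 3, they serve to confirm an existing track.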
3. The method of claim 2, wherein the matching the tracking result with each of the first targets to determine whether the first target belonging to the same object as the tracking result exists in the target detection result comprises:
calculating a first intersection over union (IoU) between the tracking result and each first high confidence target;
if any first IoU is greater than or equal to a preset IoU threshold, determining that the first target belonging to the same object as the tracking result exists;
if all the first IoUs are smaller than the preset IoU threshold, calculating a second IoU between the tracking result and each second high confidence target;
if any second IoU is greater than or equal to the preset IoU threshold, determining that the first target belonging to the same object as the tracking result exists, and updating the confidence of the object to which the second high confidence target belongs, wherein the latest confidence of that object is greater than the first confidence threshold;
and if all the second IoUs are smaller than the preset IoU threshold, determining that the first target belonging to the same object as the tracking result does not exist.
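Claim 3 refines the detector-side matching into a two-tier cascade. A sketch, reusing iou(), T1, and the tier lists from the sketches above; the exact promotion rule for a matched lower-tier detection is an assumption, chosen only so that the updated confidence exceeds the first threshold:

```python
def cascade_match(track_box, first_high, second_high, iou_thr=0.5):
    """Match a tracking result against high-tier detections first, then the lower tier."""
    for det in first_high:
        if iou(track_box, det["box"]) >= iou_thr:
            return det                    # first IoU meets the threshold: same object
    for det in second_high:
        if iou(track_box, det["box"]) >= iou_thr:
            det["score"] = T1 + 1e-3      # assumed rule: lift the latest confidence
            return det                    # strictly above the first threshold
    return None                           # no first target matches this tracking result
```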
4. The method of claim 1, further comprising, after the determining whether the second target belonging to the same object as the tracking result exists in the clustering result:
if the second target belonging to the same object does not exist, updating a track parameter value.
5. The method of claim 4, wherein the matching the tracking result with each of the second targets to determine whether the second target belonging to the same object as the tracking result exists in the clustering result comprises:
calculating a third intersection over union (IoU) between the tracking result and each second target;
if any third IoU is greater than or equal to a preset IoU threshold, determining that the second target belonging to the same object as the tracking result exists, and updating the confidence of the tracking result, wherein the latest confidence of the tracking result is greater than the first confidence threshold;
and if all the third IoUs are smaller than the preset IoU threshold, determining that the second target belonging to the same object as the tracking result does not exist.
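Claim 5 gives the clustering-side counterpart, again via IoU. A sketch reusing iou() and T1 from above; the confidence-update rule is an assumption:

```python
def match_clusters(track: dict, second_targets: list, iou_thr: float = 0.5) -> bool:
    """Match a tracking result against cluster boxes (the third IoU of claim 5)."""
    for clu in second_targets:
        if iou(track["box"], clu["box"]) >= iou_thr:
            track["score"] = T1 + 1e-3    # latest confidence exceeds the first threshold
            return True                   # output the tracking result's information
    return False                          # no cluster match: update the track parameter
```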
6. The method of claim 4 or 5, further comprising, after the updating of the track parameter value:
if the updated track parameter value is smaller than a preset track parameter value, acquiring a next frame point cloud as the current frame point cloud, and returning to the step of performing target detection on the current frame point cloud by using the deep learning-based target detection algorithm to obtain a target detection result;
and if the updated track parameter value is greater than or equal to the preset track parameter value, deleting the track corresponding to the tracking result.
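Claims 4 and 6 together read as a per-track miss counter with a deletion threshold. A sketch under that interpretation; the field name "misses" and the preset value of 3 are assumptions:

```python
def update_track(track: dict, matched: bool, preset: int = 3) -> bool:
    """Maintain the 'track parameter value' of claims 4 and 6 as a miss counter."""
    if matched:
        track["misses"] = 0                          # a match keeps the track alive
        return True
    track["misses"] = track.get("misses", 0) + 1     # unmatched frame: bump the parameter
    return track["misses"] < preset                  # False: delete the corresponding track
```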
7. The method of claim 1, wherein the first confidence threshold is determined according to the correspondence between the confidence and the accuracy and recall rates.
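Claim 7 leaves the derivation of the first confidence threshold open beyond tying it to accuracy and recall. One plausible realization, assuming "accuracy" is meant in the precision sense and that scored, labeled validation detections are available:

```python
import numpy as np

def pick_first_threshold(scores: np.ndarray, labels: np.ndarray,
                         target_precision: float = 0.9) -> float:
    """Sweep candidate thresholds; keep the lowest one whose precision
    meets the target, so that recall stays as high as possible."""
    best = 1.0
    for t in np.unique(scores):
        pred = scores >= t
        tp = float(np.sum(pred & (labels == 1)))
        precision = tp / max(float(pred.sum()), 1.0)
        if precision >= target_precision:
            best = min(best, float(t))
    return best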
8. An object detection device, comprising:
an acquisition unit, configured to acquire a current frame point cloud and a previous frame point cloud;
a point cloud processing unit, configured to perform target detection on the current frame point cloud by using a deep learning-based target detection algorithm to obtain a target detection result, wherein the target detection result comprises information of each first target;
the point cloud processing unit being further configured to perform a clustering operation on the current frame point cloud to obtain a clustering result, wherein the clustering result comprises a cluster of each second target;
a tracking unit, configured to track third targets of the previous frame point cloud to obtain a tracking result of each third target, wherein a third target is an object whose confidence is greater than or equal to a first confidence threshold;
a matching unit, configured to, for the tracking result of each third target, match the tracking result with each first target and determine whether the first target belonging to the same object as the tracking result exists in the target detection result;
the matching unit being further configured to output the information of the first target if the first target belonging to the same object exists;
to match the tracking result with each second target if the first target belonging to the same object does not exist, and to determine whether the second target belonging to the same object as the tracking result exists in the clustering result;
and to output the information of the tracking result if the second target belonging to the same object exists.
9. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method of any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN202211366669.3A 2022-11-01 2022-11-01 Target detection method and device, electronic equipment and readable storage medium Pending CN115690475A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211366669.3A CN115690475A (en) 2022-11-01 2022-11-01 Target detection method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211366669.3A CN115690475A (en) 2022-11-01 2022-11-01 Target detection method and device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN115690475A true CN115690475A (en) 2023-02-03

Family

ID=85048980

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211366669.3A Pending CN115690475A (en) 2022-11-01 2022-11-01 Target detection method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN115690475A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117252899A (en) * 2023-09-26 2023-12-19 探维科技(苏州)有限公司 Target tracking method and device
CN117252899B (en) * 2023-09-26 2024-05-17 探维科技(苏州)有限公司 Target tracking method and device
CN117111019A (en) * 2023-10-25 2023-11-24 深圳市先创数字技术有限公司 Target tracking and monitoring method and system based on radar detection
CN117111019B (en) * 2023-10-25 2024-01-09 深圳市先创数字技术有限公司 Target tracking and monitoring method and system based on radar detection

Similar Documents

Publication Publication Date Title
CN115690475A (en) Target detection method and device, electronic equipment and readable storage medium
CN111045008B (en) Vehicle millimeter wave radar target identification method based on widening calculation
CN111427032B (en) Room wall contour recognition method based on millimeter wave radar and terminal equipment
CN109002820B (en) License plate recognition method and device and related equipment
CN113009441B (en) Method and device for identifying multipath target of radar moving reflecting surface
CN112580734B (en) Target detection model training method, system, terminal equipment and storage medium
CN112526470A (en) Method and device for calibrating radar parameters, electronic equipment and storage medium
CN112763993A (en) Method and device for calibrating radar parameters, electronic equipment and storage medium
CN110646798B (en) Target track association method, radar and terminal equipment
CN114638294A (en) Data enhancement method and device, terminal equipment and storage medium
CN108693517B (en) Vehicle positioning method and device and radar
CN113723467A (en) Sample collection method, device and equipment for defect detection
CN111860623A (en) Method and system for counting tree number based on improved SSD neural network
WO2023006101A1 (en) Target detection method and apparatus based on laser scanning, and target detection terminal
CN115406452A (en) Real-time positioning and mapping method, device and terminal equipment
CN112416128B (en) Gesture recognition method and terminal equipment
CN114241195A (en) Target identification method and device, electronic equipment and storage medium
CN109213322B (en) Method and system for gesture recognition in virtual reality
CN113205059A (en) Parking space detection method, system, terminal and computer readable storage medium
CN113009467A (en) Radar blind area target detection tracking method and device and terminal equipment
CN110609561A (en) Pedestrian tracking method and device, computer readable storage medium and robot
CN113763305A (en) Method and device for calibrating article defects and electronic equipment
CN113050057B (en) Personnel detection method and device and terminal equipment
CN117495916B (en) Multi-target track association method, device, communication equipment and storage medium
CN113030896B (en) Radar target clustering method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination