CN117934549A - 3D multi-target tracking method based on probability distribution guiding data association - Google Patents
- Publication number: CN117934549A (application CN202410061124.4A)
- Authority: CN (China)
- Prior art keywords: track, detection, state, target, association
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G06T7/246 — Image analysis; analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06F17/16 — Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
- G06F17/18 — Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
- G06V10/25 — Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V10/75 — Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; coarse-fine approaches, e.g. multi-scale approaches; using context analysis; selection of dictionaries
Abstract
The invention relates to a 3D multi-target tracking method based on probability distribution guiding data association, and belongs to the field of environment perception. The method comprises the following steps. S1: obtain the information of each target object at every moment by using a 3D detector. S2: construct Gaussian distribution models of the detection targets and the track targets. S3: based on the distribution models constructed in step S2, fully consider the positional stability between objects, design a new cost function, and achieve the optimal matching between detection frames and tracks. S4: among the unmatched detections, initialize each detection whose confidence exceeds a threshold as a new track in the uncertain state; an uncertain-state track that matches successfully for 3 consecutive frames is converted to a certain-state track; a certain-state track that fails to match is converted to a lost-state track; a lost-state track that fails to match for 12 consecutive frames is deleted.
Description
Technical Field
The invention belongs to the field of environment perception, and relates to a 3D multi-target tracking method based on probability distribution guiding data association.
Background
3D multi-target tracking is a core technology of intelligent driving environment perception. The TBD (tracking-by-detection) framework is the currently mainstream tracking framework, which decomposes the multi-target tracking task into two subtasks of detection and tracking, and realizes multi-target tracking by performing accurate detection and effective data association on a frame-by-frame basis.
In multi-target tracking methods based on the TBD framework, a predictive component is typically used to propagate each trajectory smoothly so that data association can work well. Mainstream trackers model this prediction with an uncertainty-aware filter, most commonly a Kalman filter; since the motion of a tracked object is inherently uncertain, Kalman filtering is a natural choice for trajectory prediction. However, most existing TBD-based multi-target tracking algorithms then discard this uncertainty model at the data association stage and instead associate a deterministic detection with a deterministic trajectory. This ignores the uncertainty of both the track and the detection, fails to represent the inherent properties of the tracked target and of the sensor's detections, and reduces the effectiveness and reliability of the similarity computation.
Thus, there is a need for an accurate target tracking method.
Disclosure of Invention
In view of the above, the present invention aims to provide a 3D multi-target tracking method based on probability distribution guiding data association, which is used for better modeling a target state and improving association accuracy.
In order to achieve the above purpose, the present invention provides the following technical solutions:
A 3D multi-target tracking method based on probability distribution guiding data association specifically comprises the following steps:
S1: acquiring target object information at each moment by using a detector, wherein the target object information comprises position information, size information, orientation information and detection confidence;
S2: for a detection target, constructing a detection target gaussian distribution model by using information acquired from a detector; for a track target, estimating the state of the track target at the current moment by using a Kalman filter, and constructing a track target Gaussian distribution model;
S3: introducing JS divergence of the track confidence to calculate the association cost between the detection target Gaussian distribution model and the track target Gaussian distribution model, and matching a detection frame and the track by using greedy matching;
S4: initializing each unmatched detection whose confidence exceeds a threshold as a new track in the uncertain state; an uncertain-state track that matches successfully within the following first expected number of frames is converted to a certain-state track; a certain-state track that fails to match is converted to a lost-state track; a lost-state track that fails to match consecutively within the following second expected number of frames is deleted.
Further, in step S1, the target object information acquired by the detector is expressed as:
$$D_t = \left[{}^{D}x_t,\ {}^{D}y_t,\ {}^{D}z_t,\ {}^{D}\theta_t,\ {}^{D}l_t,\ {}^{D}w_t,\ {}^{D}h_t,\ {}^{D}s_t\right]^{T}$$
wherein (x, y, z) are the coordinates of the centre point of the three-dimensional detection frame in the lidar coordinate system, (l, w, h) are the length, width and height of the detection frame, θ is the orientation angle, s is the object-detection confidence, the superscript D marks the state as a detection state, and the subscript t denotes the representation at discrete time t.
Further, the step S2 specifically includes the following steps:
S21: estimating the stability of the target object information acquired by the detector by statistical estimation: the Euclidean distance between every detection frame acquired by the detector and the real label $G_t$ is calculated, a plurality of detection frames are extracted according to the distance as samples, and the stability of each component of the samples is estimated, expressed as
$${}^{D}\Sigma_t = \mathrm{diag}\left(\sigma_x^2,\ \sigma_y^2,\ \sigma_z^2,\ \sigma_w^2,\ \sigma_h^2,\ \sigma_l^2,\ \sigma_\theta^2\right)$$
wherein $\sigma_x^2$, $\sigma_y^2$, $\sigma_z^2$, $\sigma_w^2$, $\sigma_h^2$, $\sigma_l^2$ and $\sigma_\theta^2$ represent the variances of the object in the x, y, z, width, height, length and orientation directions respectively, computed from the residuals between the detector components $(D_x, D_y, D_z, D_w, D_h, D_l, D_\theta)$ and the corresponding real-label components $(G_x, G_y, G_z, G_w, G_h, G_l, G_\theta)$; $\mathrm{diag}$ denotes the covariance matrix constructed by placing these variance values on the diagonal.
S22: for each detection state, constructing a detection-target Gaussian distribution model from the corresponding target object information $D_t$ and the component stability ${}^{D}\Sigma_t$;
S23: for different track states, a Kalman filter is used to predict the state mean ${}^{T}\mu_t$ and the stability ${}^{T}\Sigma_t$ of each component at the current time; the state-vector mean is expressed as
$${}^{T}\mu_t = \left[{}^{T}x_t,\ {}^{T}y_t,\ {}^{T}z_t,\ {}^{T}\theta_t,\ {}^{T}l_t,\ {}^{T}w_t,\ {}^{T}h_t,\ {}^{T}\dot{x}_t,\ {}^{T}\dot{y}_t,\ {}^{T}\dot{z}_t\right]^{T}$$
wherein (x, y, z) are the coordinates of the centre point of the three-dimensional detection frame in the lidar coordinate system, (l, w, h) are the length, width and height of the detection frame, θ is the orientation angle, and $(\dot{x}, \dot{y}, \dot{z})$ are the velocity components of the object in the three directions; the superscript T marks the state as a track state, and the subscript t denotes the representation at discrete time t.
The stability ${}^{T}\Sigma_t$ of each component at the current time is the covariance matrix of the 10-dimensional state vector at the current time, predicted by the Kalman filter.
The components consistent with the detection state are extracted from the state mean and covariance matrix to construct the track-target Gaussian distribution model.
Further, the step S3 specifically includes the following steps:
S31: the JS divergence guided by the track confidence coefficient is used, and meanwhile, the difference of the detection state and the track state in the mean value and the stability of each component is considered, so that an association cost matrix is obtained;
S32: according to the association cost matrix, matching the detection frames and tracks using greedy matching.
Further, the step S31 specifically includes: calculating the cost matrix C between the target Gaussian distributions obtained by the detector at time t and the track-target Gaussian distributions at time t obtained by Kalman-filter prediction from the tracks at time t-1:
$$C = C_{mod} \cdot m_l$$
where $C_{mod}$ is a JS-divergence variant between the detection target and the predicted track, built from the original JS divergence $C_{JS}$ together with an orientation term. The original JS divergence is calculated as
$$C_{JS}(p \,\|\, q) = \tfrac{1}{2}\, C_{KL}(p \,\|\, m) + \tfrac{1}{2}\, C_{KL}(q \,\|\, m), \qquad m = \tfrac{1}{2}(p + q),$$
wherein $C_{KL}(p\,\|\,m)$ represents the KL divergence between the p distribution and the m distribution, and $C_{KL}(q\,\|\,m)$ represents the KL divergence between the q distribution and the m distribution.
The orientation term $\Delta_\theta$ is the difference between the orientation angles of the detection target and the predicted track, computed from ${}^{D}\theta_t$, the orientation-angle component in the detection state, and ${}^{T}\theta_t$, the orientation-angle component in the track state.
$m_l$ is computed from the mean covariance of the track target.
Further, the step S32 specifically includes: sorting the current detections from high to low by object-detection confidence at the current time, then calculating the association cost between each detection target and all track targets one by one, taking the minimum association cost $C_{min}$, and judging whether the detection target corresponds to the track target.
If $C_{min}$ is smaller than the association threshold $\sigma_1$, the track is considered to be successfully matched with the detection result, and the track is updated with the corresponding detection result; if $C_{min}$ is greater than the association threshold $\sigma_1$, each detection whose confidence exceeds the track-birth threshold $\sigma_2$ is initialized as a new track; if a track cannot be matched with any detection, the track state is changed to the lost state. The association threshold $\sigma_1$ and the track-birth threshold $\sigma_2$ are tuned for different tracking scenes.
The invention has the beneficial effects that:
(1) Based on the TBD framework, an MOT framework is provided that performs data association using the probability distributions of the tracks and the detections, fully accounting for the influence of the uncertainty of the target track and of the sensor detections on data association.
(2) Based on JS divergence, a new cost function is designed to measure the similarity of two multidimensional Gaussian distributions, automatically enlarging a track's association range when the track is predicted over consecutive frames.
(3) The divergence degree of the track is used to guide the data association, which resolves the mismatching that the enlarged track association range might otherwise cause.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objects and other advantages of the invention may be realized and obtained by means of the instrumentalities and combinations particularly pointed out in the specification.
Drawings
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in detail below with reference to the preferred embodiments and the accompanying drawings, in which:
FIG. 1 is a flow chart of a 3D multi-target tracking method based on probability distribution guidance data association of the present invention;
FIG. 2 is a schematic diagram of the present invention for constructing a Gaussian distribution.
Detailed Description
Other advantages and effects of the present invention will become readily apparent to those skilled in the art from the disclosure of this specification, which describes embodiments of the invention by way of specific examples. The invention may also be practiced or applied through other, different embodiments, and the details of this specification may be modified or varied in various ways without departing from the spirit and scope of the invention. It should be noted that the illustrations provided in the following embodiments merely illustrate the basic idea of the invention schematically, and the following embodiments and the features in the embodiments may be combined with each other without conflict.
The drawings are for illustrative purposes only; they are schematic rather than physical representations and are not intended to limit the invention. For the purpose of better illustrating the embodiments, certain elements of the drawings may be omitted, enlarged or reduced, and do not represent the size of the actual product; it will be appreciated by those skilled in the art that certain well-known structures and their descriptions may be omitted from the drawings.
The same or similar reference numbers in the drawings of the embodiments correspond to the same or similar components. In the description of the invention, terms such as "upper", "lower", "left", "right", "front" and "rear", where present, indicate orientations or positional relationships based on those shown in the drawings; they are used only for convenience and simplicity of description, not to indicate or imply that the referenced device or element must have a specific orientation or be constructed and operated in a specific orientation. Such terms are therefore merely exemplary, should not be construed as limiting the invention, and their specific meaning can be understood by those of ordinary skill in the art according to the circumstances.
Referring to fig. 1-2, the invention provides a 3D multi-target tracking method based on probability distribution guiding data association, which specifically comprises the following steps:
Step 1: the target object information at each moment is acquired by using a detector, wherein the target object information comprises position information, size information, orientation information and detection confidence. The detector is referred to as a 3D detector, which is commonly used for 3D multi-target tracking detection.
Specifically, the 3D detector is used to obtain the information of each target object at every moment, namely $D_t=\left[{}^{D}x_t,\ {}^{D}y_t,\ {}^{D}z_t,\ {}^{D}\theta_t,\ {}^{D}l_t,\ {}^{D}w_t,\ {}^{D}h_t,\ {}^{D}s_t\right]^{T}$, where (x, y, z) are the coordinates of the centre point of the three-dimensional detection frame in the lidar coordinate system, (l, w, h) are the length, width and height of the detection frame, θ is the orientation angle (the heading angle), s is the object-detection confidence, the superscript D marks the state as a detection state, and the subscript t denotes the representation at discrete time t.
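The detection state above can be sketched as a plain vector; the helper name and the sample values below are illustrative assumptions, not from the patent:

```python
import numpy as np

def make_detection_state(x, y, z, theta, l, w, h, s):
    """Pack one 3D detection into D_t = [x, y, z, theta, l, w, h, s]^T:
    box centre in the lidar frame, heading angle, box size, and confidence."""
    return np.array([x, y, z, theta, l, w, h, s], dtype=float)

# Illustrative detection: a car-sized box 12 m ahead with confidence 0.91.
d = make_detection_state(12.3, -4.1, 0.8, 0.52, 4.6, 1.9, 1.6, 0.91)
print(d.shape)  # (8,)
```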
Step 2: for a detection target, constructing a detection target gaussian distribution model by using information acquired from a detector; for a track target, estimating the state of the track target at the current moment by using a Kalman filter, and constructing a track target Gaussian distribution model.
Specifically, the detailed steps of constructing the detection target gaussian distribution model and the track target gaussian distribution model include the following:
Step 21: the stability of the 3D object information acquired by the detector is estimated by statistical estimation. The Euclidean distance between every detection frame obtained by the detector and the real label $G_t$ is computed, the object frames whose distance is less than 1 m are extracted as samples, and the stability of each component of the samples is estimated and represented by the diagonal covariance matrix ${}^{D}\Sigma_t$.
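Under the stated assumptions (a 1 m gating radius and detection/label pairs already matched row by row), step 21 can be sketched as follows; the array layout and function name are illustrative:

```python
import numpy as np

def estimate_detector_stability(detections, ground_truths, radius=1.0):
    """Estimate per-component detector noise from matched detection/label pairs.

    detections, ground_truths: (N, 7) arrays with columns (x, y, z, w, h, l, theta).
    A pair is kept as a sample if the detection centre lies within `radius`
    metres of the label centre (the patent uses 1 m). Returns the diagonal
    covariance diag(var_x, var_y, var_z, var_w, var_h, var_l, var_theta).
    """
    dist = np.linalg.norm(detections[:, :3] - ground_truths[:, :3], axis=1)
    residual = detections[dist < radius] - ground_truths[dist < radius]
    return np.diag(residual.var(axis=0))

# Toy example: four perfect labels, detections jittered by ±0.1 m in x.
gt = np.zeros((4, 7))
det = gt.copy()
det[:, 0] = [0.1, -0.1, 0.1, -0.1]
sigma_d = estimate_detector_stability(det, gt)
```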
Step 22: as shown in fig. 2, the detector outputs only the object-frame information $D_t$; combined with the corresponding ${}^{D}\Sigma_t$, it is converted into a Gaussian distribution model.
Step 23: as shown in fig. 2, the dashed circles represent the prediction process. For each track state, a Kalman filter is used to predict the state of the track at the current time, comprising the state mean, denoted ${}^{T}\mu_t$, and the stability of each component, denoted ${}^{T}\Sigma_t$; the components consistent with the detection state are extracted from the state mean and covariance matrix to construct a Gaussian distribution model.
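The Kalman prediction of the 10-dimensional track state can be sketched as below; the process-noise magnitude `q` is an assumed placeholder, not a value tuned by the patent:

```python
import numpy as np

def kalman_predict(mu, P, q=0.01, dt=1.0):
    """One constant-velocity Kalman prediction step for the 10-dim track state
    mu = [x, y, z, theta, l, w, h, vx, vy, vz]^T with covariance P.

    F moves the box centre by the velocity; Q (a scaled identity here, an
    assumed placeholder for the tuned process noise) inflates P, so the longer
    a track goes unmatched, the wider its predicted Gaussian becomes.
    """
    F = np.eye(10)
    F[0, 7] = F[1, 8] = F[2, 9] = dt   # x += vx*dt, y += vy*dt, z += vz*dt
    Q = q * np.eye(10)
    return F @ mu, F @ P @ F.T + Q

mu = np.zeros(10)
mu[7] = 2.0                            # 2 m/s along x
mu_pred, P_pred = kalman_predict(mu, np.eye(10))
print(mu_pred[0])  # 2.0
```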
Step 3: and introducing JS divergence of the track confidence to calculate the association cost between the detection target Gaussian distribution model and the track target Gaussian distribution model, and matching the detection frame and the track by using greedy matching.
Specifically, according to the two types of Gaussian distribution models obtained in step 2, the association cost between distributions is calculated with a JS-divergence variant that introduces the track confidence, and the detection frames and tracks are matched by greedy matching, specifically comprising the following steps.
Step 31: calculate the cost matrix C between the target Gaussian distributions obtained by the detector at time t and the track-target Gaussian distributions at time t obtained by Kalman-filter prediction from the tracks at time t-1:
$$C = C_{mod} \cdot m_l$$
where $C_{mod}$ is a JS-divergence variant between the detection target and the predicted track, built from the original JS divergence $C_{JS}$ together with an orientation term. The original JS divergence is calculated as
$$C_{JS}(p \,\|\, q) = \tfrac{1}{2}\, C_{KL}(p \,\|\, m) + \tfrac{1}{2}\, C_{KL}(q \,\|\, m), \qquad m = \tfrac{1}{2}(p + q),$$
wherein $C_{KL}(p\,\|\,m)$ represents the KL divergence between the p distribution and the m distribution, and $C_{KL}(q\,\|\,m)$ represents the KL divergence between the q distribution and the m distribution.
The orientation term $\Delta_\theta$ is the difference between the orientation angles of the detection target and the predicted track, computed from ${}^{D}\theta_t$, the orientation-angle component in the detection state, and ${}^{T}\theta_t$, the orientation-angle component in the track state.
$m_l$ is computed from the mean covariance of the track target.
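The exact formula of the patent's JS-divergence variant appears only as a figure in the publication; as a hedged sketch, a symmetric JS-style cost between a detection Gaussian and a track Gaussian can be approximated with the closed-form Gaussian KL divergence, replacing the mixture m = (p+q)/2 by a moment-matched Gaussian (the exact JS divergence of two Gaussians has no closed form):

```python
import numpy as np

def kl_gauss(mu_p, S_p, mu_q, S_q):
    """Closed-form KL(p||q) between two multivariate Gaussians."""
    d = mu_p.size
    S_q_inv = np.linalg.inv(S_q)
    diff = mu_q - mu_p
    return 0.5 * (np.trace(S_q_inv @ S_p) + diff @ S_q_inv @ diff
                  - d + np.log(np.linalg.det(S_q) / np.linalg.det(S_p)))

def js_gauss(mu_p, S_p, mu_q, S_q):
    """Symmetric JS-style divergence; the mixture m = (p+q)/2 is approximated
    by a Gaussian with the mixture's moment-matched mean and covariance."""
    mu_m = 0.5 * (mu_p + mu_q)
    # moment-matched covariance of the equal-weight two-component mixture
    S_m = 0.5 * (S_p + S_q) + 0.25 * np.outer(mu_p - mu_q, mu_p - mu_q)
    return 0.5 * kl_gauss(mu_p, S_p, mu_m, S_m) + 0.5 * kl_gauss(mu_q, S_q, mu_m, S_m)
```

A useful property of this symmetric cost is that it shrinks as the track covariance grows, which is what lets a long-predicted track keep a wide association range.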
Step 32: sort the current detections from high to low by object-detection confidence at the current time, then compute the association cost between each detection and all tracks one by one, take the minimum association cost $C_{min}$, and judge whether the detection and the track correspond.
If $C_{min}$ is smaller than the association threshold $\sigma_1$, the track is considered successfully matched with the detection, and the corresponding detection is used to update the track; if $C_{min}$ is greater than the association threshold $\sigma_1$, each detection whose confidence exceeds the track-birth threshold $\sigma_2$ is initialized as a new track. If a track cannot be matched with any detection, the track state is changed to the lost state. The association threshold $\sigma_1$ and the track-birth threshold $\sigma_2$ are tuned for different tracking scenes.
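Assuming the association-cost matrix C from step 31 is already available, the greedy matching of step 32 can be sketched as follows; the function name and return layout are illustrative:

```python
import numpy as np

def greedy_match(cost, det_conf, sigma1):
    """Greedy association: visit detections in descending confidence and assign
    each to its cheapest still-free track if the cost is below the gate sigma1.

    cost: (num_dets, num_tracks) association-cost matrix.
    Returns (matches, unmatched_dets, unmatched_tracks).
    """
    matches, used_tracks = [], set()
    order = np.argsort(-np.asarray(det_conf))          # high confidence first
    for d in order:
        free = [t for t in range(cost.shape[1]) if t not in used_tracks]
        if not free:
            break
        t_best = min(free, key=lambda t: cost[d, t])   # cheapest free track
        if cost[d, t_best] < sigma1:                   # gate with sigma1
            matches.append((int(d), t_best))
            used_tracks.add(t_best)
    matched_d = {d for d, _ in matches}
    unmatched_dets = [d for d in range(cost.shape[0]) if d not in matched_d]
    unmatched_tracks = [t for t in range(cost.shape[1]) if t not in used_tracks]
    return matches, unmatched_dets, unmatched_tracks

cost = np.array([[0.1, 0.9],
                 [0.2, 0.3]])
m, ud, ut = greedy_match(cost, det_conf=[0.9, 0.8], sigma1=0.5)
print(m)  # [(0, 0), (1, 1)]
```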
Step 4: each unmatched detection whose confidence exceeds a threshold is initialized as a new track in the uncertain state; an uncertain-state track that matches successfully within the following first expected number of frames is converted to a certain-state track; a certain-state track that fails to match is converted to a lost-state track; a lost-state track that fails to match consecutively within the following second expected number of frames is deleted.
Specifically, a large number of experiments show that the first expected number of frames is preferably 3 and the second expected number of frames is preferably 12. That is, an uncertain-state track is converted to a certain-state track if it matches successfully over the next 3 frames, and a lost-state track is deleted if the next 12 consecutive frames fail to match.
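The life cycle above can be sketched as a small state machine; how an uncertain (tentative) track that misses a frame is handled is not spelled out by the patent, so deleting it immediately here is an assumption:

```python
class Track:
    """Track life cycle: tentative -> confirmed -> lost -> deleted, using the
    patent's preferred horizons (3 matched frames to confirm, 12 consecutive
    missed frames to delete a lost track)."""
    CONFIRM_HITS, MAX_MISSES = 3, 12

    def __init__(self):
        self.state, self.hits, self.misses = "tentative", 0, 0

    def mark_hit(self):                      # matched to a detection this frame
        self.hits += 1
        self.misses = 0
        if self.state in ("tentative", "lost") and self.hits >= self.CONFIRM_HITS:
            self.state = "confirmed"

    def mark_miss(self):                     # no detection matched this frame
        self.misses += 1
        if self.state == "tentative":        # assumption: drop unconfirmed tracks
            self.state = "deleted"
        elif self.state == "confirmed":
            self.state = "lost"
        elif self.state == "lost" and self.misses >= self.MAX_MISSES:
            self.state = "deleted"

t = Track()
for _ in range(3):
    t.mark_hit()
print(t.state)  # confirmed
```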
In summary, the invention provides a MOT framework for carrying out data association by utilizing the probability distribution of the track and the detection based on the TBD framework, and fully considers the influence of the uncertainty of the target track and the sensor detection on the data association. The invention also designs a new cost function for measuring the similarity of two multidimensional Gaussian distributions based on JS divergence, and realizes the automatic expansion of the association range of the track under the condition of continuous track prediction. The invention guides the data association by utilizing the divergence degree of the track, and solves the problem of error matching possibly caused by the expansion of the track association range.
Finally, it is noted that the above embodiments are only for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made thereto without departing from the spirit and scope of the present invention, which is intended to be covered by the claims of the present invention.
Claims (6)
1. The 3D multi-target tracking method based on probability distribution guiding data association is characterized by comprising the following steps of:
S1: acquiring target object information at each moment by using a detector, wherein the target object information comprises position information, size information, orientation information and detection confidence;
S2: for a detection target, constructing a detection target gaussian distribution model by using information acquired from a detector; for a track target, estimating the state of the track target at the current moment by using a Kalman filter, and constructing a track target Gaussian distribution model;
S3: introducing JS divergence of the track confidence to calculate the association cost between the detection target Gaussian distribution model and the track target Gaussian distribution model, and matching a detection frame and the track by using greedy matching;
S4: initializing each unmatched detection whose confidence exceeds a threshold as a new track in the uncertain state; an uncertain-state track that matches successfully within the following first expected number of frames is converted to a certain-state track; a certain-state track that fails to match is converted to a lost-state track; a lost-state track that fails to match consecutively within the following second expected number of frames is deleted.
2. The method according to claim 1, wherein in step S1, the target object information acquired by the detector is represented as:
$$D_t = \left[{}^{D}x_t,\ {}^{D}y_t,\ {}^{D}z_t,\ {}^{D}\theta_t,\ {}^{D}l_t,\ {}^{D}w_t,\ {}^{D}h_t,\ {}^{D}s_t\right]^{T}$$
wherein (x, y, z) are the coordinates of the centre point of the three-dimensional detection frame in the lidar coordinate system, (l, w, h) are the length, width and height of the detection frame, θ is the orientation angle, s is the object-detection confidence, the superscript D marks the state as a detection state, and the subscript t denotes the representation at discrete time t.
3. The 3D multi-target tracking method based on probability distribution guiding data association according to claim 1, wherein the step S2 specifically comprises the following steps:
S21: estimating the stability of the target object information acquired by the detector by statistical estimation: the Euclidean distance between every detection frame acquired by the detector and the real label $G_t$ is calculated, a plurality of detection frames are extracted according to the distance as samples, and the stability of each component of the samples is estimated, expressed as
$${}^{D}\Sigma_t = \mathrm{diag}\left(\sigma_x^2,\ \sigma_y^2,\ \sigma_z^2,\ \sigma_w^2,\ \sigma_h^2,\ \sigma_l^2,\ \sigma_\theta^2\right)$$
wherein $\sigma_x^2$, $\sigma_y^2$, $\sigma_z^2$, $\sigma_w^2$, $\sigma_h^2$, $\sigma_l^2$ and $\sigma_\theta^2$ represent the variances of the object in the x, y, z, width, height, length and orientation directions respectively, computed from the residuals between the detector components $(D_x, D_y, D_z, D_w, D_h, D_l, D_\theta)$ and the corresponding real-label components $(G_x, G_y, G_z, G_w, G_h, G_l, G_\theta)$; $\mathrm{diag}$ denotes the covariance matrix constructed by placing these variance values on the diagonal;
S22: for each detection state, constructing a detection-target Gaussian distribution model from the corresponding target object information $D_t$ and the sample stability ${}^{D}\Sigma_t$;
S23: for different track states, a Kalman filter is used to predict the state mean ${}^{T}\mu_t$ and the stability ${}^{T}\Sigma_t$ of each component at the current time; the state-vector mean is expressed as
$${}^{T}\mu_t = \left[{}^{T}x_t,\ {}^{T}y_t,\ {}^{T}z_t,\ {}^{T}\theta_t,\ {}^{T}l_t,\ {}^{T}w_t,\ {}^{T}h_t,\ {}^{T}\dot{x}_t,\ {}^{T}\dot{y}_t,\ {}^{T}\dot{z}_t\right]^{T}$$
wherein (x, y, z) are the coordinates of the centre point of the three-dimensional detection frame in the lidar coordinate system, (l, w, h) are the length, width and height of the detection frame, θ is the orientation angle, and $(\dot{x}, \dot{y}, \dot{z})$ are the velocity components of the object in the three directions; the superscript T marks the state as a track state, and the subscript t denotes the representation at discrete time t;
the stability ${}^{T}\Sigma_t$ of each component at the current time is the covariance matrix of the 10-dimensional state vector at the current time, predicted by the Kalman filter;
and the components consistent with the detection state are extracted from the state mean and covariance matrix to construct the track-target Gaussian distribution model.
4. The 3D multi-target tracking method based on probability distribution guiding data association according to claim 3, wherein the step S3 specifically comprises the following steps:
S31: the JS divergence guided by the track confidence coefficient is used, and meanwhile, the difference of the detection state and the track state in the mean value and the stability of each component is considered, so that an association cost matrix is obtained;
S32: according to the association cost matrix, matching the detection frames and tracks using greedy matching.
5. The 3D multi-target tracking method based on probability-distribution-guided data association according to claim 4, wherein step S31 specifically comprises: calculating the cost matrix C between the target Gaussian distribution obtained by the detector at time t and the track-target Gaussian distribution at time t predicted from the time t-1 track by Kalman filtering:
C = C_mod · m_l
C_mod is a JS-divergence variant between the detection target and the predicted track, and its calculation formula is as follows:
here, C_JS is the original JS divergence, calculated as:
C_JS = (1/2) C_KL(p‖m) + (1/2) C_KL(q‖m), where m = (p + q)/2
wherein C_KL(p‖m) denotes the KL divergence between the p distribution and the m distribution, and C_KL(q‖m) denotes the KL divergence between the q distribution and the m distribution;
here, Δθ is the difference between the orientation angles of the detection target and the predicted track, and is calculated as follows:
wherein θ_t^D represents the orientation-angle component in the detection state, and θ_t^T represents the orientation-angle component in the track state;
m_l is the mean covariance of the track target, and its calculation formula is:
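For reference, the KL divergence between two Gaussians has a closed form, and the JS divergence's mixture m can be approximated by a moment-matched Gaussian; this generic sketch does not reproduce the patent's exact C_mod or m_l terms:

```python
import numpy as np

def kl_gauss(mu0, S0, mu1, S1):
    """Closed-form KL divergence KL(N0 || N1) between multivariate Gaussians."""
    k = mu0.shape[0]
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0) + diff @ S1_inv @ diff - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def js_gauss(mu_p, S_p, mu_q, S_q):
    """JS-style divergence between two Gaussians.

    The true mixture m = (p + q)/2 is not Gaussian, so it is approximated
    here by a Gaussian with the mixture's first two moments (an assumption;
    the patent's C_JS computation may differ).
    """
    mu_m = 0.5 * (mu_p + mu_q)
    d_p, d_q = mu_p - mu_m, mu_q - mu_m
    S_m = 0.5 * (S_p + np.outer(d_p, d_p)) + 0.5 * (S_q + np.outer(d_q, d_q))
    return 0.5 * kl_gauss(mu_p, S_p, mu_m, S_m) + 0.5 * kl_gauss(mu_q, S_q, mu_m, S_m)
```

Identical distributions yield a divergence of zero, and the value grows as the detection and track Gaussians separate, which is the property the association cost relies on.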
6. The 3D multi-target tracking method based on probability-distribution-guided data association according to claim 5, wherein step S32 specifically comprises: sorting the current detection results from high to low by the object detection confidence at the current time, then calculating the association cost between each detection target and all track targets one by one, taking the minimum association cost C_min, and judging whether the detection target corresponds to a track target;
if C_min is smaller than the association threshold σ_1, the track is considered successfully matched with the detection result, and the track is updated with the corresponding detection; if C_min is greater than the association threshold σ_1, any target whose detection confidence is greater than the new-track threshold σ_2 is initialized as a new track; if a track cannot be matched with any detection, its state is changed to the lost state; the association threshold σ_1 and the new-track threshold σ_2 are adjusted according to different tracking scenes.
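The association rule of claim 6 can be sketched as follows; the function name and the threshold values σ_1 = 0.5 and σ_2 = 0.7 are illustrative assumptions:

```python
import numpy as np

def greedy_match(cost, det_conf, sigma1=0.5, sigma2=0.7):
    """Greedy detection-to-track assignment as described in step S32.

    cost     : (num_dets, num_tracks) association cost matrix C
    det_conf : (num_dets,) detection confidences
    sigma1   : association threshold; sigma2 : new-track threshold
    (threshold values are illustrative and scene-dependent)
    """
    matches, new_tracks = [], []
    used = set()
    # Process detections from high to low confidence.
    for d in np.argsort(-det_conf):
        free = [t for t in range(cost.shape[1]) if t not in used]
        if free:
            t_best = min(free, key=lambda t: cost[d, t])
            if cost[d, t_best] < sigma1:  # C_min below sigma_1: match succeeds
                matches.append((int(d), t_best))
                used.add(t_best)
                continue
        if det_conf[d] > sigma2:  # unmatched, confident detection: new track
            new_tracks.append(int(d))
    # Tracks matched by no detection change to the lost state.
    lost_tracks = [t for t in range(cost.shape[1]) if t not in used]
    return matches, new_tracks, lost_tracks
```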
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410061124.4A CN117934549B (en) | 2024-01-16 | 2024-01-16 | 3D multi-target tracking method based on probability distribution guiding data association |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117934549A true CN117934549A (en) | 2024-04-26 |
CN117934549B CN117934549B (en) | 2024-07-09 |
Family
ID=90755146
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410061124.4A Active CN117934549B (en) | 2024-01-16 | 2024-01-16 | 3D multi-target tracking method based on probability distribution guiding data association |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117934549B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113269098A (en) * | 2021-05-27 | 2021-08-17 | 中国人民解放军军事科学院国防科技创新研究院 | Multi-target tracking positioning and motion state estimation method based on unmanned aerial vehicle |
CN114332158A (en) * | 2021-12-17 | 2022-04-12 | 重庆大学 | 3D real-time multi-target tracking method based on camera and laser radar fusion |
CN114638855A (en) * | 2022-01-21 | 2022-06-17 | 山东汇创信息技术有限公司 | Multi-target tracking method, equipment and medium |
US20230034973A1 (en) * | 2021-07-29 | 2023-02-02 | Aptiv Technologies Limited | Methods and Systems for Predicting Trajectory Data of an Object |
CN116128932A (en) * | 2023-04-18 | 2023-05-16 | 无锡学院 | Multi-target tracking method |
CN116385493A (en) * | 2023-04-11 | 2023-07-04 | 中科博特智能科技(安徽)有限公司 | Multi-moving-object detection and track prediction method in field environment |
CN117036397A (en) * | 2023-06-14 | 2023-11-10 | 浙江大学 | Multi-target tracking method based on fusion information association and camera motion compensation |
Non-Patent Citations (2)
Title |
---|
Ren Jiamin; Gong Ningsheng; Han Zhenyang: "Multi-target tracking algorithm based on YOLOv3 and Kalman filtering", Computer Applications and Software, no. 05, 12 May 2020 (2020-05-12) *
Liu Yujie; Dou Changhong; Zhao Qilu; Li Zongmin: "Online multi-target tracking based on state prediction and motion structure", Journal of Computer-Aided Design & Computer Graphics, no. 02, 15 February 2018 (2018-02-15) *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||