CN113077495A - Online multi-target tracking method, system, computer equipment and readable storage medium - Google Patents

Online multi-target tracking method, system, computer equipment and readable storage medium

Info

Publication number
CN113077495A
Authority
CN
China
Prior art keywords
bounding box
box information
information
target bounding
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010009050.1A
Other languages
Chinese (zh)
Other versions
CN113077495B (en)
Inventor
李梓龙
周鹏
钟国旗
郭继舜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Automobile Group Co Ltd
Original Assignee
Guangzhou Automobile Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Automobile Group Co Ltd filed Critical Guangzhou Automobile Group Co Ltd
Priority to CN202010009050.1A priority Critical patent/CN113077495B/en
Publication of CN113077495A publication Critical patent/CN113077495A/en
Application granted granted Critical
Publication of CN113077495B publication Critical patent/CN113077495B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/277 - Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30241 - Trajectory
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 - Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention discloses an online multi-target tracking method based on a cost function, which comprises the following steps: acquiring scene information of the current frame and the detected target bounding box information; judging whether a track exists, and if not, allocating a tracking identifier and a Kalman filter to the detected target bounding box information to form a new track; if so, predicting the bounding box position information of the current frame with the Kalman filter corresponding to the existing track; constructing, according to the scene information, a similarity matrix between the detected target bounding box information and the predicted target bounding box information; solving the assignment over the similarity matrix with the Hungarian algorithm to generate a data association result; and updating the track according to the data association result. The invention also discloses an online multi-target tracking system based on the cost function, a computer device and a computer-readable storage medium. By adopting the method and the device, the data association can be optimized, different types of targets can be tracked, and the tracking stability and accuracy can be improved.

Description

Online multi-target tracking method, system, computer equipment and readable storage medium
Technical Field
The invention relates to the technical field of intelligent tracking, in particular to an online multi-target tracking method based on a cost function, an online multi-target tracking system based on the cost function, computer equipment and a computer readable storage medium.
Background
Multi-target tracking methods are generally divided into online methods and offline methods. An offline method recovers the trajectory of each target in a video sequence by analyzing the information of all frames, and usually solves the data association problem within a network-flow framework. Most online methods analyze the information of two consecutive frames and associate the detections obtained in the current frame with the existing tracks, typically by bipartite graph matching.
In the field of intelligent vehicles, tracking must run in real time, so online multi-target tracking methods are used. The basic tracking paradigm of the online method is tracking-by-detection, and the whole process is divided into two stages: the first stage is object detection, which obtains the bounding boxes of the objects of interest in each frame of the video sequence; the second stage is data association, which is the most critical step in multi-target tracking.
For the data association of multi-target tracking, various methods can be adopted, such as the nearest neighbor algorithm, joint probabilistic data association (JPDA), multiple hypothesis tracking (MHT) and cost functions. The nearest neighbor data association algorithm has a small computational load and is easy to implement in hardware, but it is only applicable to target tracking in sparse target and clutter environments; when the target or clutter density is high, false alarms and missed detections occur easily, and the tracking performance is poor. JPDA adapts well to data association in clutter environments, but suffers from combinatorial explosion as the number of tracked targets grows, which greatly increases the computational load. MHT combines the advantages of JPDA and the nearest neighbor algorithm, but relies heavily on prior information, which limits it when an intelligent vehicle tracks obstacles under frequently changing road conditions and weather. In cost-function designs, the problem of targets being occluded for a long time during tracking is usually addressed by combining the position of an obstacle's bounding box in the image with various image statistics, or by combining global information.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a cost-function-based online multi-target tracking method, a cost-function-based online multi-target tracking system, a computer device and a computer-readable storage medium, which can optimize the data association, track different types of targets, and improve the tracking stability and accuracy.
In order to solve the technical problem, the invention provides an online multi-target tracking method based on a cost function, which comprises the following steps: S1, acquiring scene information of the current frame and the detected target bounding box information; S2, judging whether a track exists, and if not, allocating a tracking identifier and a Kalman filter to the detected target bounding box information to form a new track, and returning to step S1; if so, predicting the bounding box position information of the current frame with the Kalman filter corresponding to the existing track; S3, constructing, according to the scene information, a similarity matrix between the detected target bounding box information and the predicted target bounding box information; S4, solving the assignment over the similarity matrix with the Hungarian algorithm to generate a data association result; S5, updating the track according to the data association result, and returning to step S1.
As an improvement of the above scheme, the method for detecting the target bounding box information includes: and detecting the target bounding box information in the current frame according to a pre-trained detector model.
As a modification of the above, the step S3 includes: calculating the similarity characteristic cost between the predicted target bounding box information and the detected target bounding box information, wherein cost = env × appearance + (1-env) × cls × bbox, env is the influence factor of the scene information, appearance is the appearance characteristic of the target, cls is the category information, and bbox is the overlapping degree of the target bounding boxes; and constructing a similarity matrix according to the similarity characteristics.
As an improvement of the above scheme, the value of the impact factor is determined according to scene information.
As a modification of the above, the step S5 includes: when the predicted target bounding box information is matched with the detected target bounding box information, updating the detected target bounding box information into the existing track through a Kalman filter; deleting the predicted target bounding box information when the predicted target bounding box information does not match the detected target bounding box information; and when the detected target bounding box information is not matched with the predicted target bounding box information, distributing a tracking identifier and a Kalman filter to the detected target bounding box information to form a new track.
Correspondingly, the invention also provides an online multi-target tracking system based on the cost function, which comprises: the acquisition module is used for acquiring scene information of the current frame and the detected target bounding box information; the judging module is used for judging whether a track exists, allocating a tracking identifier and a Kalman filter to the detected target bounding box information to form a new track when the judgment is negative, and predicting the bounding box position information of the current frame according to the Kalman filter corresponding to the existing track when the judgment is positive; the construction module is used for constructing a similarity matrix between the detected target bounding box information and the predicted target bounding box information according to the scene information; the association module is used for solving the assignment over the similarity matrix with the Hungarian algorithm to generate a data association result; and the updating module is used for updating the track according to the data association result.
As an improvement of the above solution, the construction module includes: the calculating unit is used for calculating the similarity characteristic cost between the predicted target bounding box information and the detected target bounding box information, wherein cost = env × appearance + (1-env) × cls × bbox, env is the influence factor of the scene information, appearance is the appearance characteristic of the target, cls is the category information, and bbox is the overlapping degree of the target bounding boxes; and the construction unit is used for constructing a similarity matrix according to the similarity characteristics.
As an improvement of the above solution, the update module includes: a first matching unit, configured to update the detected target bounding box information to an existing trajectory through a kalman filter when the predicted target bounding box information matches the detected target bounding box information; a second matching unit configured to delete the predicted target bounding box information when the predicted target bounding box information does not match the detected target bounding box information; and the third matching unit is used for distributing a tracking identifier and a Kalman filter to the detected target bounding box information to form a new track when the detected target bounding box information is not matched with the predicted target bounding box information.
Correspondingly, the invention also provides computer equipment which comprises a memory and a processor, wherein the memory stores a computer program, and the processor executes the steps of the online multi-target tracking method.
Accordingly, the present invention also provides a computer readable storage medium having stored thereon a computer program which, when being executed by a processor, carries out the steps of the online multi-target tracking method.
The implementation of the invention has the following beneficial effects:
The method and the device combine the scene information of the target with two-dimensional characteristics such as the target bounding box, and design a unique cost function to calculate the similarity of tracked targets. By optimizing the data association, the computation is greatly simplified and different similarity calculations are realized for different target types, so the method can be used to track different types of targets such as people, non-motor vehicles and vehicles, improving tracking stability and accuracy, reducing changes of the tracking identifier, and providing more accurate obstacle information for decision making.
Drawings
FIG. 1 is a flow chart of the cost function based on-line multi-target tracking method of the present invention;
FIG. 2 is a schematic structural diagram of the cost function-based online multi-target tracking system of the present invention;
FIG. 3 is a schematic structural diagram of the construction module in the cost function-based online multi-target tracking system of the present invention;
FIG. 4 is a schematic structural diagram of an update module in the cost function-based online multi-target tracking system according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 shows a flowchart of the cost function-based online multi-target tracking method of the present invention, which includes:
and S1, acquiring scene information of the current frame and detected target bounding box information.
The scene information is obtained from a map. Specifically, the map provides the current road conditions for determining the scene information of the current frame, where the scene information includes, but is not limited to, urban area, highway, expressway, and others.
The target bounding box information is detected from a camera. Specifically, the method for detecting the target bounding box information includes: detecting the target bounding box information in the current frame with a pre-trained detector model. The detector model may be a YOLO V3 detection model, but is not limited thereto.
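For illustration only, the detected target bounding box information for one frame might be represented as in the sketch below; the `detector.detect(frame)` call is a hypothetical placeholder for whatever pre-trained model (e.g. YOLO V3) is actually used, since the patent does not specify a detector interface.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    """One detected target bounding box in the current frame."""
    cls: str      # category: "person", "non-motor vehicle" or "vehicle"
    cx: float     # bounding-box center x (pixels)
    cy: float     # bounding-box center y (pixels)
    w: float      # bounding-box width (pixels)
    h: float      # bounding-box height (pixels)
    score: float  # detector confidence

def get_detections(frame, detector) -> List[Detection]:
    # Hypothetical interface: `detector.detect(frame)` is assumed to return
    # (class_name, cx, cy, w, h, score) tuples for the current frame.
    return [Detection(*d) for d in detector.detect(frame)]
```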
S2, judging whether a track exists; if not, allocating a tracking identifier and a Kalman filter to the detected target bounding box information to form a new track, and returning to step S1 to receive the scene information and the detected target bounding box information of the next frame; if so, predicting the bounding box position information of the current frame with the Kalman filter corresponding to the existing track, and proceeding to step S3.
It should be noted that the Kalman filter is a recursive filter for time-varying linear systems. Where such a system can be described by a difference-equation model containing orthogonal state variables, the Kalman filter incorporates the estimation errors of past measurements into each new measurement in order to estimate future errors.
The motion model of a moving target (such as a pedestrian or a vehicle) between two frames (the current frame and the next frame) is assumed to be a constant-velocity motion model. The Kalman filter therefore simplifies the computation effectively and improves efficiency.
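As a minimal sketch of this step, the constant-velocity Kalman filter below tracks the state [cx, cy, w, h, vx, vy] of one bounding box, using the Detection record sketched above; the state layout and the noise values are illustrative assumptions, since the patent does not fix a particular parameterization.

```python
import numpy as np

class ConstantVelocityKF:
    """Constant-velocity Kalman filter for one tracked bounding box.

    State x = [cx, cy, w, h, vx, vy]; measurement z = [cx, cy, w, h].
    """

    def __init__(self, det, dt=1.0):
        self.x = np.array([det.cx, det.cy, det.w, det.h, 0.0, 0.0])
        self.P = np.eye(6) * 10.0            # state covariance (assumed)
        self.F = np.eye(6)                   # constant-velocity transition
        self.F[0, 4] = self.F[1, 5] = dt     # cx += vx*dt, cy += vy*dt
        self.H = np.eye(4, 6)                # measurement picks [cx, cy, w, h]
        self.Q = np.eye(6) * 0.01            # process noise (assumed)
        self.R = np.eye(4)                   # measurement noise (assumed)

    def predict(self):
        """Predict the bounding box position in the current frame."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:4]                    # predicted [cx, cy, w, h]

    def update(self, det):
        """Fold the matched detection back into the track state."""
        z = np.array([det.cx, det.cy, det.w, det.h])
        y = z - self.H @ self.x                     # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)    # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P
```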
And S3, constructing a similarity matrix between the detected target bounding box information and the predicted target bounding box information according to the scene information.
Specifically, the step S3 includes:
(1) Calculate the similarity characteristic cost between the predicted target bounding box information and the detected target bounding box information according to the formula cost = env × appearance + (1-env) × cls × bbox.
Wherein env is the influence factor of the scene information; it should be noted that the value of the influence factor is determined by the scene information. For example, when the scene is an urban area, the influence factor is 0.7; when the scene is a highway, the influence factor is 0.2; when the scene is an expressway, the influence factor is 0.2; when the scene is any other environment, the influence factor is 0.5. The values are not limited thereto, and the skilled person can adjust them according to the actual situation.
appearance is the appearance characteristic of the target and can be used to compare the appearance similarity of objects of the same type. Specifically, three class-specific cosine metrics are trained on Re-ID data sets of people, non-motor vehicles and vehicles respectively, and are used to calculate the similarity between objects of the same class;
cls is category information which comprises people, non-motor vehicles and vehicles;
bbox is the overlapping degree of the target bounding boxes, i.e. the IoU between the detected target bounding box information and the predicted target bounding box information.
(2) And constructing a similarity matrix according to the similarity characteristics.
It should be noted that the similarity matrix is obtained by calculating a similarity characteristic between the detected target bounding box information and the predicted target bounding box information.
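A sketch of this cost and of the resulting similarity matrix is given below, under stated assumptions: the env values follow the example above, appearance_similarity stands in for the class-specific cosine metric (not implemented here), cls is treated as 1 when the two categories agree and 0 otherwise, bbox is the IoU of the two boxes, and pred and det carry the fields of the Detection record sketched earlier. The last two conventions are illustrative assumptions rather than values fixed by the patent.

```python
import numpy as np

# Example influence factors per scene (see the values given above).
ENV_FACTOR = {"urban": 0.7, "highway": 0.2, "expressway": 0.2, "other": 0.5}

def iou(a, b):
    """IoU of two boxes given as (cx, cy, w, h)."""
    ax1, ay1, ax2, ay2 = a[0] - a[2] / 2, a[1] - a[3] / 2, a[0] + a[2] / 2, a[1] + a[3] / 2
    bx1, by1, bx2, by2 = b[0] - b[2] / 2, b[1] - b[3] / 2, b[0] + b[2] / 2, b[1] + b[3] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0.0 else 0.0

def pair_cost(pred, det, scene, appearance_similarity):
    """cost = env * appearance + (1 - env) * cls * bbox for one track/detection pair."""
    env = ENV_FACTOR.get(scene, 0.5)
    appearance = appearance_similarity(pred, det)   # class-specific cosine metric
    cls = 1.0 if pred.cls == det.cls else 0.0       # assumed: same category -> 1
    bbox = iou((pred.cx, pred.cy, pred.w, pred.h), (det.cx, det.cy, det.w, det.h))
    return env * appearance + (1.0 - env) * cls * bbox

def similarity_matrix(preds, dets, scene, appearance_similarity):
    """Rows: predicted boxes of existing tracks; columns: detections of the current frame."""
    return np.array([[pair_cost(p, d, scene, appearance_similarity) for d in dets]
                     for p in preds])
```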
And S4, solving the assignment over the similarity matrix with the Hungarian algorithm to generate a data association result.
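The assignment itself can be handed to a standard Hungarian-algorithm solver; the sketch below uses SciPy's linear_sum_assignment with maximize=True, since the matrix built in step S3 scores similarity (higher is better), and applies a minimum-similarity gate whose value is an assumption, not one given in the patent.

```python
from scipy.optimize import linear_sum_assignment

def associate(sim, min_similarity=0.3):
    """Split track and detection indices into matches and leftovers.

    `sim` is the track-by-detection similarity matrix from step S3;
    `min_similarity` is an assumed gating threshold.
    """
    unmatched_tracks = set(range(sim.shape[0]))
    unmatched_dets = set(range(sim.shape[1]))
    matches = []
    if sim.size:
        rows, cols = linear_sum_assignment(sim, maximize=True)  # Hungarian algorithm
        for r, c in zip(rows, cols):
            if sim[r, c] >= min_similarity:      # keep only sufficiently similar pairs
                matches.append((r, c))
                unmatched_tracks.discard(r)
                unmatched_dets.discard(c)
    return matches, sorted(unmatched_tracks), sorted(unmatched_dets)
```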
And S5, updating the track according to the data association result, and returning to the step S1 to receive the scene information of the next frame and the detected target bounding box information.
Specifically, the step S5 includes:
(1) when the predicted target bounding box information is matched with the detected target bounding box information, updating the detected target bounding box information into the existing track through a Kalman filter;
(2) deleting the predicted target bounding box information when the predicted target bounding box information does not match the detected target bounding box information;
(3) and when the detected target bounding box information is not matched with the predicted target bounding box information, distributing a tracking identifier and a Kalman filter to the detected target bounding box information to form a new track.
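Putting the three cases together, step S5 might be sketched as follows, reusing the Detection record and ConstantVelocityKF sketched earlier; the Track container and the identifier counter are illustrative details, while the three branches follow the rules just listed.

```python
from itertools import count

_next_id = count(1)                        # tracking-identifier generator (illustrative)

class Track:
    """Pairs a tracking identifier with the Kalman filter of one target."""
    def __init__(self, det):
        self.track_id = next(_next_id)     # allocate a tracking identifier
        self.kf = ConstantVelocityKF(det)  # and a Kalman filter (sketched earlier)
        self.cls = det.cls

def update_tracks(tracks, dets, matches, unmatched_tracks, unmatched_dets):
    # (1) matched prediction/detection: update the existing track via its Kalman filter
    for t_idx, d_idx in matches:
        tracks[t_idx].kf.update(dets[d_idx])
    # (2) prediction with no matching detection: delete that track
    dead = set(unmatched_tracks)
    tracks = [t for i, t in enumerate(tracks) if i not in dead]
    # (3) detection with no matching prediction: start a new track
    tracks.extend(Track(dets[d_idx]) for d_idx in unmatched_dets)
    return tracks
```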
It can be seen from the above that the method removes the dependency on three-dimensional characteristics and only combines two-dimensional characteristics of the target such as the scene information, category information, appearance characteristic and target bounding box, designing a unique cost function (i.e., cost = env × appearance + (1-env) × cls × bbox) to calculate the similarity of tracked targets. This greatly simplifies the computation, improves tracking stability and accuracy, reduces changes of the tracking identifier, and provides more accurate obstacle information for decision making.
Referring to fig. 2, fig. 2 shows a specific structure of the cost function-based online multi-target tracking system 100 of the present invention, which includes:
and the obtaining module 1 is used for obtaining scene information of the current frame and detected target bounding box information. Wherein the scene information is collected by a map; the target bounding box information is detected by the camera according to a pre-trained detector model, which may be, but is not limited to, a YOLO V3 detection model.
And the judging module 2 is used for judging whether a track exists, allocating a tracking identifier and a Kalman filter to the detected target bounding box information to form a new track when the judgment is negative, and predicting the bounding box position information of the current frame according to the Kalman filter corresponding to the existing track when the judgment is positive. It should be noted that when the judging module 2 judges that no track exists, a new track is created and the obtaining module 1 is driven to perform a new round of processing; when the judging module 2 judges that a track exists, the bounding box position of the current frame is predicted from the existing track, and the construction module is driven to carry out the next processing.
And the constructing module 3 is used for constructing a similarity matrix between the detected target bounding box information and the predicted target bounding box information according to the scene information.
And the association module 4 is used for solving the assignment over the similarity matrix with the Hungarian algorithm to generate a data association result.
And the updating module 5 is used for updating the track according to the data association result. And after the updating module 5 finishes the track updating operation, driving the acquiring module 1 again to perform a new round of processing.
Therefore, the method and the device perform the similarity calculation from two-dimensional data such as the scene information and the target bounding box information, and improve the tracking stability by optimizing the data association.
As shown in fig. 3, the construction module 3 includes:
a calculating unit 31, configured to calculate a similarity feature cost between the predicted target bounding box information and the detected target bounding box information.
Wherein cost = env × appearance + (1-env) × cls × bbox;
env is the influence factor of the scene information; it should be noted that the value of the influence factor is determined by the scene information. For example, when the scene is an urban area, the influence factor is 0.7; when the scene is a highway, the influence factor is 0.2; when the scene is an expressway, the influence factor is 0.2; when the scene is any other environment, the influence factor is 0.5. The values are not limited thereto, and the skilled person can adjust them according to the actual situation.
appearance is the appearance characteristic of the target and can be used to compare the appearance similarity of objects of the same type. Specifically, three class-specific cosine metrics are trained on Re-ID data sets of people, non-motor vehicles and vehicles respectively, and are used to calculate the similarity between objects of the same class;
cls is category information which comprises people, non-motor vehicles and vehicles;
bbox is the overlapping degree of the target bounding boxes, i.e. the IoU between the detected target bounding box information and the predicted target bounding box information.
And the constructing unit 32 is used for constructing a similarity matrix according to the similarity characteristics.
Therefore, the construction module 3 removes the dependency on three-dimensional characteristics and only combines two-dimensional characteristics of the target such as the scene information, category information, appearance characteristic and target bounding box, designing a unique cost function (i.e., cost = env × appearance + (1-env) × cls × bbox) to calculate the similarity of tracked targets, which greatly simplifies the computation.
As shown in fig. 4, the update module 5 includes:
a first matching unit 51 configured to update the detected target bounding box information into an existing trajectory through a kalman filter when the predicted target bounding box information matches the detected target bounding box information;
a second matching unit 52 configured to delete the predicted target bounding box information when the predicted target bounding box information does not match the detected target bounding box information;
a third matching unit 53, configured to, when the detected target bounding box information does not match the predicted target bounding box information, allocate a tracking identifier and a kalman filter to the detected target bounding box information to form a new trajectory.
According to the method, a unique cost function is designed to calculate the similarity of tracked targets by combining two-dimensional characteristics of the target such as the scene information, category information, appearance characteristic and target bounding box, which greatly simplifies the computation; the method can be used to track different types of targets such as people, non-motor vehicles and vehicles, and provides more accurate obstacle information for decision making.
Correspondingly, the invention also provides computer equipment which comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the steps of the online multi-target tracking method when executing the computer program. Meanwhile, the invention also provides a computer readable storage medium, on which a computer program is stored, wherein the computer program realizes the steps of the online multi-target tracking method when being executed by a processor.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.

Claims (10)

1. An online multi-target tracking method based on a cost function is characterized by comprising the following steps:
S1, acquiring scene information of the current frame and detected target bounding box information;
S2, judging whether a track exists,
if not, distributing a tracking identifier and a Kalman filter to the detected target bounding box information to form a new track, and returning to the step S1;
if so, predicting the position information of the bounding box of the current frame according to the Kalman filter corresponding to the existing track;
S3, constructing a similarity matrix between the detected target bounding box information and the predicted target bounding box information according to the scene information;
S4, solving the assignment over the similarity matrix with the Hungarian algorithm to generate a data association result;
S5, updating the track according to the data association result, and returning to the step S1.
2. The cost function-based online multi-target tracking method according to claim 1, wherein the target bounding box information detection method comprises: and detecting the target bounding box information in the current frame according to a pre-trained detector model.
3. The cost function-based online multi-target tracking method according to claim 1, wherein the step S3 includes:
calculating the similarity characteristic cost between the predicted target bounding box information and the detected target bounding box information, wherein cost = env × appearance + (1-env) × cls × bbox, env is the influence factor of the scene information, appearance is the appearance characteristic of the target, cls is the category information, and bbox is the overlapping degree of the target bounding boxes;
and constructing a similarity matrix according to the similarity characteristics.
4. The cost function-based online multi-target tracking method of claim 3, wherein the value of the impact factor is determined according to scene information.
5. The cost function-based online multi-target tracking method according to claim 1, wherein the step S5 includes:
when the predicted target bounding box information is matched with the detected target bounding box information, updating the detected target bounding box information into the existing track through a Kalman filter;
deleting the predicted target bounding box information when the predicted target bounding box information does not match the detected target bounding box information;
and when the detected target bounding box information is not matched with the predicted target bounding box information, distributing a tracking identifier and a Kalman filter to the detected target bounding box information to form a new track.
6. An online multi-target tracking system based on a cost function, comprising:
the acquisition module is used for acquiring scene information of a current frame and detected target bounding box information;
the judging module is used for judging whether a track exists or not, when the judgment is negative, a tracking identifier and a Kalman filter are distributed to the detected target bounding box information to form a new track, and when the judgment is positive, the bounding box position information of the current frame is predicted according to the Kalman filter corresponding to the existing track;
the construction module is used for constructing a similarity matrix between the detected target bounding box information and the predicted target bounding box information according to the scene information;
the association module is used for solving the assignment over the similarity matrix with the Hungarian algorithm to generate a data association result;
and the updating module is used for updating the track according to the data association result.
7. The cost function based online multi-target tracking system of claim 6, wherein the construction module comprises:
the calculating unit is used for calculating the similarity characteristic cost between the predicted target bounding box information and the detected target bounding box information, wherein cost = env × appearance + (1-env) × cls × bbox, env is the influence factor of the scene information, appearance is the appearance characteristic of the target, cls is the category information, and bbox is the overlapping degree of the target bounding boxes;
and the construction unit is used for constructing a similarity matrix according to the similarity characteristics.
8. The cost function based online multi-target tracking system of claim 6, wherein the update module comprises:
a first matching unit, configured to update the detected target bounding box information to an existing trajectory through a kalman filter when the predicted target bounding box information matches the detected target bounding box information;
a second matching unit configured to delete the predicted target bounding box information when the predicted target bounding box information does not match the detected target bounding box information;
and the third matching unit is used for distributing a tracking identifier and a Kalman filter to the detected target bounding box information to form a new track when the detected target bounding box information is not matched with the predicted target bounding box information.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 5.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 5.
CN202010009050.1A 2020-01-06 2020-01-06 Online multi-target tracking method, system, computer equipment and readable storage medium Active CN113077495B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010009050.1A CN113077495B (en) 2020-01-06 2020-01-06 Online multi-target tracking method, system, computer equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010009050.1A CN113077495B (en) 2020-01-06 2020-01-06 Online multi-target tracking method, system, computer equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN113077495A true CN113077495A (en) 2021-07-06
CN113077495B CN113077495B (en) 2023-01-31

Family

ID=76608740

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010009050.1A Active CN113077495B (en) 2020-01-06 2020-01-06 Online multi-target tracking method, system, computer equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN113077495B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113687341A (en) * 2021-08-16 2021-11-23 山东沂蒙交通发展集团有限公司 Holographic intersection sensing method based on multi-source sensor
CN114897944A (en) * 2021-11-10 2022-08-12 北京中电兴发科技有限公司 Multi-target continuous tracking method based on DeepSORT

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101777187A (en) * 2010-01-15 2010-07-14 西安电子科技大学 Video microscopic image cell automatic tracking method based on Meanshift arithmetic
CN110288627A (en) * 2019-05-22 2019-09-27 江苏大学 One kind being based on deep learning and the associated online multi-object tracking method of data
US10445885B1 (en) * 2015-10-01 2019-10-15 Intellivision Technologies Corp Methods and systems for tracking objects in videos and images using a cost matrix
CN110415277A (en) * 2019-07-24 2019-11-05 中国科学院自动化研究所 Based on light stream and the multi-target tracking method of Kalman filtering, system, device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101777187A (en) * 2010-01-15 2010-07-14 西安电子科技大学 Video microscopic image cell automatic tracking method based on Meanshift arithmetic
US10445885B1 (en) * 2015-10-01 2019-10-15 Intellivision Technologies Corp Methods and systems for tracking objects in videos and images using a cost matrix
CN110288627A (en) * 2019-05-22 2019-09-27 江苏大学 One kind being based on deep learning and the associated online multi-object tracking method of data
CN110415277A (en) * 2019-07-24 2019-11-05 中国科学院自动化研究所 Based on light stream and the multi-target tracking method of Kalman filtering, system, device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LOU KANG ET AL.: "Infrared target detection and tracking method based on target motion features", Journal of Nanjing University of Science and Technology *
YANG YANG: "Research on target tracking algorithms in complex scenes", China Master's Theses Full-Text Database (Information Science and Technology) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113687341A (en) * 2021-08-16 2021-11-23 山东沂蒙交通发展集团有限公司 Holographic intersection sensing method based on multi-source sensor
CN114897944A (en) * 2021-11-10 2022-08-12 北京中电兴发科技有限公司 Multi-target continuous tracking method based on DeepSORT
CN114897944B (en) * 2021-11-10 2022-10-25 北京中电兴发科技有限公司 Multi-target continuous tracking method based on DeepSORT

Also Published As

Publication number Publication date
CN113077495B (en) 2023-01-31

Similar Documents

Publication Publication Date Title
CN111488795B (en) Real-time pedestrian tracking method applied to unmanned vehicle
CN109521756B (en) Obstacle motion information generation method and apparatus for unmanned vehicle
Badenas et al. Motion-based segmentation and region tracking in image sequences
CN112154481B (en) Target tracking based on multiple measurement hypotheses
JP4467838B2 (en) Image recognition apparatus and image recognition method
JP4984659B2 (en) Own vehicle position estimation device
CN111222568A (en) Vehicle networking data fusion method and device
KR102168288B1 (en) System and method for tracking multiple object using multi-LiDAR
Broßeit et al. Probabilistic rectangular-shape estimation for extended object tracking
CN113077495B (en) Online multi-target tracking method, system, computer equipment and readable storage medium
JP2021026644A (en) Article detection apparatus, article detection method, and article-detecting computer program
CN116299500B (en) Laser SLAM positioning method and device integrating target detection and tracking
CN116403139A (en) Visual tracking and positioning method based on target detection
CN110426714B (en) Obstacle identification method
Vu et al. Grid-based localization and online mapping with moving objects detection and tracking: new results
CN114002667A (en) Multi-neighbor extended target tracking algorithm based on random matrix method
Dey et al. Robust perception architecture design for automotive cyber-physical systems
Du et al. Particle filter based object tracking of 3D sparse point clouds for autopilot
CN111832343B (en) Tracking method and device, and storage medium
CN110781730A (en) Intelligent driving sensing method and sensing device
Stellet et al. Post processing of laser scanner measurements for testing advanced driver assistance systems
CN113227713A (en) Method and system for generating environment model for positioning
CN115236672A (en) Obstacle information generation method, device, equipment and computer readable storage medium
Lindenmaier et al. Comparison of sensor data fusion algorithms for automotive perception system
Ronecker et al. Dynamic Occupancy Grids for Object Detection: A Radar-Centric Approach

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant